00:00:00.001 Started by upstream project "autotest-per-patch" build number 132803 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.025 The recommended git tool is: git 00:00:00.026 using credential 00000000-0000-0000-0000-000000000002 00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.042 Fetching changes from the remote Git repository 00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.065 Using shallow fetch with depth 1 00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.065 > git --version # timeout=10 00:00:00.098 > git --version # 'git version 2.39.2' 00:00:00.098 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.137 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.137 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.952 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.965 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.979 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.979 > git config core.sparsecheckout # timeout=10 00:00:02.991 > git read-tree -mu HEAD # timeout=10 00:00:03.010 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.034 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.034 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.223 [Pipeline] Start of Pipeline 00:00:03.234 [Pipeline] library 00:00:03.235 Loading library shm_lib@master 00:00:03.236 Library shm_lib@master is cached. Copying from home. 00:00:03.250 [Pipeline] node 00:00:03.263 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.264 [Pipeline] { 00:00:03.274 [Pipeline] catchError 00:00:03.276 [Pipeline] { 00:00:03.289 [Pipeline] wrap 00:00:03.298 [Pipeline] { 00:00:03.306 [Pipeline] stage 00:00:03.308 [Pipeline] { (Prologue) 00:00:03.326 [Pipeline] echo 00:00:03.328 Node: VM-host-WFP7 00:00:03.334 [Pipeline] cleanWs 00:00:03.344 [WS-CLEANUP] Deleting project workspace... 00:00:03.344 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.351 [WS-CLEANUP] done 00:00:03.585 [Pipeline] setCustomBuildProperty 00:00:03.684 [Pipeline] httpRequest 00:00:04.074 [Pipeline] echo 00:00:04.075 Sorcerer 10.211.164.112 is alive 00:00:04.083 [Pipeline] retry 00:00:04.085 [Pipeline] { 00:00:04.095 [Pipeline] httpRequest 00:00:04.099 HttpMethod: GET 00:00:04.100 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.100 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.102 Response Code: HTTP/1.1 200 OK 00:00:04.102 Success: Status code 200 is in the accepted range: 200,404 00:00:04.103 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.248 [Pipeline] } 00:00:04.264 [Pipeline] // retry 00:00:04.270 [Pipeline] sh 00:00:04.553 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.568 [Pipeline] httpRequest 00:00:04.954 [Pipeline] echo 00:00:04.955 Sorcerer 10.211.164.112 is alive 00:00:04.964 [Pipeline] retry 00:00:04.965 [Pipeline] { 00:00:04.978 [Pipeline] httpRequest 00:00:04.982 HttpMethod: GET 00:00:04.983 URL: 
http://10.211.164.112/packages/spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz 00:00:04.983 Sending request to url: http://10.211.164.112/packages/spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz 00:00:04.986 Response Code: HTTP/1.1 200 OK 00:00:04.986 Success: Status code 200 is in the accepted range: 200,404 00:00:04.987 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz 00:00:22.231 [Pipeline] } 00:00:22.240 [Pipeline] // retry 00:00:22.244 [Pipeline] sh 00:00:22.525 + tar --no-same-owner -xf spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz 00:00:25.091 [Pipeline] sh 00:00:25.378 + git -C spdk log --oneline -n5 00:00:25.378 805149865 build: use VERSION file for storing version 00:00:25.378 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:00:25.378 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:00:25.378 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:00:25.378 e2dfdf06c accel/mlx5: Register post_poller handler 00:00:25.398 [Pipeline] writeFile 00:00:25.414 [Pipeline] sh 00:00:25.730 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:25.742 [Pipeline] sh 00:00:26.028 + cat autorun-spdk.conf 00:00:26.028 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.028 SPDK_RUN_ASAN=1 00:00:26.028 SPDK_RUN_UBSAN=1 00:00:26.028 SPDK_TEST_RAID=1 00:00:26.028 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.036 RUN_NIGHTLY=0 00:00:26.038 [Pipeline] } 00:00:26.051 [Pipeline] // stage 00:00:26.065 [Pipeline] stage 00:00:26.067 [Pipeline] { (Run VM) 00:00:26.080 [Pipeline] sh 00:00:26.372 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:26.372 + echo 'Start stage prepare_nvme.sh' 00:00:26.372 Start stage prepare_nvme.sh 00:00:26.372 + [[ -n 3 ]] 00:00:26.372 + disk_prefix=ex3 00:00:26.372 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:26.372 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 
00:00:26.372 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:26.372 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.372 ++ SPDK_RUN_ASAN=1 00:00:26.372 ++ SPDK_RUN_UBSAN=1 00:00:26.372 ++ SPDK_TEST_RAID=1 00:00:26.372 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.372 ++ RUN_NIGHTLY=0 00:00:26.372 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:26.372 + nvme_files=() 00:00:26.372 + declare -A nvme_files 00:00:26.372 + backend_dir=/var/lib/libvirt/images/backends 00:00:26.372 + nvme_files['nvme.img']=5G 00:00:26.372 + nvme_files['nvme-cmb.img']=5G 00:00:26.372 + nvme_files['nvme-multi0.img']=4G 00:00:26.372 + nvme_files['nvme-multi1.img']=4G 00:00:26.372 + nvme_files['nvme-multi2.img']=4G 00:00:26.372 + nvme_files['nvme-openstack.img']=8G 00:00:26.372 + nvme_files['nvme-zns.img']=5G 00:00:26.372 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:26.372 + (( SPDK_TEST_FTL == 1 )) 00:00:26.372 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:26.372 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:26.372 + for nvme in "${!nvme_files[@]}" 00:00:26.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:26.372 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.372 + for nvme in "${!nvme_files[@]}" 00:00:26.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:26.372 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.372 + for nvme in "${!nvme_files[@]}" 00:00:26.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:26.372 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:26.372 + for nvme in "${!nvme_files[@]}" 00:00:26.372 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:26.372 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.372 + for nvme in "${!nvme_files[@]}" 00:00:26.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:26.372 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.372 + for nvme in "${!nvme_files[@]}" 00:00:26.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:26.372 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.372 + for nvme in "${!nvme_files[@]}" 00:00:26.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:26.633 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.633 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:26.633 + echo 'End stage prepare_nvme.sh' 00:00:26.633 End stage prepare_nvme.sh 00:00:26.646 [Pipeline] sh 00:00:26.933 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:26.933 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:00:26.933 00:00:26.933 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:26.933 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:26.933 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 
00:00:26.933 HELP=0 00:00:26.933 DRY_RUN=0 00:00:26.933 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:26.933 NVME_DISKS_TYPE=nvme,nvme, 00:00:26.933 NVME_AUTO_CREATE=0 00:00:26.933 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:26.933 NVME_CMB=,, 00:00:26.933 NVME_PMR=,, 00:00:26.933 NVME_ZNS=,, 00:00:26.933 NVME_MS=,, 00:00:26.933 NVME_FDP=,, 00:00:26.933 SPDK_VAGRANT_DISTRO=fedora39 00:00:26.933 SPDK_VAGRANT_VMCPU=10 00:00:26.933 SPDK_VAGRANT_VMRAM=12288 00:00:26.933 SPDK_VAGRANT_PROVIDER=libvirt 00:00:26.933 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:26.933 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:26.933 SPDK_OPENSTACK_NETWORK=0 00:00:26.933 VAGRANT_PACKAGE_BOX=0 00:00:26.933 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:26.933 FORCE_DISTRO=true 00:00:26.933 VAGRANT_BOX_VERSION= 00:00:26.933 EXTRA_VAGRANTFILES= 00:00:26.933 NIC_MODEL=virtio 00:00:26.933 00:00:26.933 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:26.933 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:28.844 Bringing machine 'default' up with 'libvirt' provider... 00:00:29.414 ==> default: Creating image (snapshot of base box volume). 00:00:29.414 ==> default: Creating domain with the following settings... 
00:00:29.414 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733754787_655b9da9e04d9c4f59dc 00:00:29.414 ==> default: -- Domain type: kvm 00:00:29.414 ==> default: -- Cpus: 10 00:00:29.414 ==> default: -- Feature: acpi 00:00:29.414 ==> default: -- Feature: apic 00:00:29.414 ==> default: -- Feature: pae 00:00:29.414 ==> default: -- Memory: 12288M 00:00:29.414 ==> default: -- Memory Backing: hugepages: 00:00:29.414 ==> default: -- Management MAC: 00:00:29.414 ==> default: -- Loader: 00:00:29.414 ==> default: -- Nvram: 00:00:29.414 ==> default: -- Base box: spdk/fedora39 00:00:29.414 ==> default: -- Storage pool: default 00:00:29.414 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733754787_655b9da9e04d9c4f59dc.img (20G) 00:00:29.414 ==> default: -- Volume Cache: default 00:00:29.414 ==> default: -- Kernel: 00:00:29.414 ==> default: -- Initrd: 00:00:29.414 ==> default: -- Graphics Type: vnc 00:00:29.414 ==> default: -- Graphics Port: -1 00:00:29.414 ==> default: -- Graphics IP: 127.0.0.1 00:00:29.414 ==> default: -- Graphics Password: Not defined 00:00:29.414 ==> default: -- Video Type: cirrus 00:00:29.414 ==> default: -- Video VRAM: 9216 00:00:29.414 ==> default: -- Sound Type: 00:00:29.414 ==> default: -- Keymap: en-us 00:00:29.414 ==> default: -- TPM Path: 00:00:29.414 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:29.414 ==> default: -- Command line args: 00:00:29.414 ==> default: -> value=-device, 00:00:29.414 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:29.414 ==> default: -> value=-drive, 00:00:29.414 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:29.414 ==> default: -> value=-device, 00:00:29.414 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.414 ==> default: -> value=-device, 00:00:29.414 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:29.414 ==> default: -> value=-drive, 00:00:29.414 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:29.414 ==> default: -> value=-device, 00:00:29.414 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.414 ==> default: -> value=-drive, 00:00:29.414 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:29.414 ==> default: -> value=-device, 00:00:29.414 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.414 ==> default: -> value=-drive, 00:00:29.414 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:29.414 ==> default: -> value=-device, 00:00:29.414 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.414 ==> default: Creating shared folders metadata... 00:00:29.414 ==> default: Starting domain. 00:00:30.799 ==> default: Waiting for domain to get an IP address... 00:00:48.899 ==> default: Waiting for SSH to become available... 00:00:48.899 ==> default: Configuring and enabling network interfaces... 00:00:54.188 default: SSH address: 192.168.121.203:22 00:00:54.188 default: SSH username: vagrant 00:00:54.188 default: SSH auth method: private key 00:00:56.727 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:04.853 ==> default: Mounting SSHFS shared folder... 00:01:07.394 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:07.394 ==> default: Checking Mount.. 
00:01:09.301 ==> default: Folder Successfully Mounted! 00:01:09.301 ==> default: Running provisioner: file... 00:01:10.240 default: ~/.gitconfig => .gitconfig 00:01:10.810 00:01:10.810 SUCCESS! 00:01:10.810 00:01:10.810 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:10.810 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:10.810 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:10.810 00:01:10.820 [Pipeline] } 00:01:10.834 [Pipeline] // stage 00:01:10.842 [Pipeline] dir 00:01:10.842 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:10.844 [Pipeline] { 00:01:10.855 [Pipeline] catchError 00:01:10.857 [Pipeline] { 00:01:10.868 [Pipeline] sh 00:01:11.150 + vagrant ssh-config --host vagrant 00:01:11.150 + sed -ne /^Host/,$p 00:01:11.150 + tee ssh_conf 00:01:13.688 Host vagrant 00:01:13.688 HostName 192.168.121.203 00:01:13.688 User vagrant 00:01:13.688 Port 22 00:01:13.688 UserKnownHostsFile /dev/null 00:01:13.688 StrictHostKeyChecking no 00:01:13.688 PasswordAuthentication no 00:01:13.688 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:13.688 IdentitiesOnly yes 00:01:13.688 LogLevel FATAL 00:01:13.688 ForwardAgent yes 00:01:13.688 ForwardX11 yes 00:01:13.688 00:01:13.703 [Pipeline] withEnv 00:01:13.705 [Pipeline] { 00:01:13.719 [Pipeline] sh 00:01:14.004 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:14.004 source /etc/os-release 00:01:14.004 [[ -e /image.version ]] && img=$(< /image.version) 00:01:14.004 # Minimal, systemd-like check. 
00:01:14.004 if [[ -e /.dockerenv ]]; then 00:01:14.004 # Clear garbage from the node's name: 00:01:14.004 # agt-er_autotest_547-896 -> autotest_547-896 00:01:14.004 # $HOSTNAME is the actual container id 00:01:14.004 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:14.004 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:14.004 # We can assume this is a mount from a host where container is running, 00:01:14.004 # so fetch its hostname to easily identify the target swarm worker. 00:01:14.004 container="$(< /etc/hostname) ($agent)" 00:01:14.004 else 00:01:14.004 # Fallback 00:01:14.004 container=$agent 00:01:14.004 fi 00:01:14.004 fi 00:01:14.004 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:14.004 00:01:14.277 [Pipeline] } 00:01:14.294 [Pipeline] // withEnv 00:01:14.303 [Pipeline] setCustomBuildProperty 00:01:14.318 [Pipeline] stage 00:01:14.320 [Pipeline] { (Tests) 00:01:14.352 [Pipeline] sh 00:01:14.672 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:14.965 [Pipeline] sh 00:01:15.250 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:15.526 [Pipeline] timeout 00:01:15.527 Timeout set to expire in 1 hr 30 min 00:01:15.528 [Pipeline] { 00:01:15.543 [Pipeline] sh 00:01:15.828 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:16.398 HEAD is now at 805149865 build: use VERSION file for storing version 00:01:16.412 [Pipeline] sh 00:01:16.697 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:16.971 [Pipeline] sh 00:01:17.255 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:17.533 [Pipeline] sh 00:01:17.819 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest 
./autoruner.sh spdk_repo 00:01:18.079 ++ readlink -f spdk_repo 00:01:18.079 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:18.079 + [[ -n /home/vagrant/spdk_repo ]] 00:01:18.079 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:18.079 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:18.079 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:18.079 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:18.079 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:18.079 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:18.079 + cd /home/vagrant/spdk_repo 00:01:18.079 + source /etc/os-release 00:01:18.079 ++ NAME='Fedora Linux' 00:01:18.079 ++ VERSION='39 (Cloud Edition)' 00:01:18.079 ++ ID=fedora 00:01:18.079 ++ VERSION_ID=39 00:01:18.079 ++ VERSION_CODENAME= 00:01:18.079 ++ PLATFORM_ID=platform:f39 00:01:18.079 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:18.079 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.079 ++ LOGO=fedora-logo-icon 00:01:18.079 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:18.079 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.079 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:18.079 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.079 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.079 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.079 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:18.079 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.079 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:18.079 ++ SUPPORT_END=2024-11-12 00:01:18.079 ++ VARIANT='Cloud Edition' 00:01:18.079 ++ VARIANT_ID=cloud 00:01:18.079 + uname -a 00:01:18.080 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:18.080 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:18.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:18.650 Hugepages 00:01:18.650 node hugesize free / total 
00:01:18.650 node0 1048576kB 0 / 0 00:01:18.650 node0 2048kB 0 / 0 00:01:18.650 00:01:18.650 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.650 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:18.650 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:18.650 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:18.650 + rm -f /tmp/spdk-ld-path 00:01:18.650 + source autorun-spdk.conf 00:01:18.650 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.650 ++ SPDK_RUN_ASAN=1 00:01:18.650 ++ SPDK_RUN_UBSAN=1 00:01:18.650 ++ SPDK_TEST_RAID=1 00:01:18.650 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.650 ++ RUN_NIGHTLY=0 00:01:18.910 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.910 + [[ -n '' ]] 00:01:18.910 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:18.910 + for M in /var/spdk/build-*-manifest.txt 00:01:18.910 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:18.910 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.910 + for M in /var/spdk/build-*-manifest.txt 00:01:18.910 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.910 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.910 + for M in /var/spdk/build-*-manifest.txt 00:01:18.910 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.910 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.910 ++ uname 00:01:18.910 + [[ Linux == \L\i\n\u\x ]] 00:01:18.910 + sudo dmesg -T 00:01:18.910 + sudo dmesg --clear 00:01:18.910 + dmesg_pid=5424 00:01:18.911 + [[ Fedora Linux == FreeBSD ]] 00:01:18.911 + sudo dmesg -Tw 00:01:18.911 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.911 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.911 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.911 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.911 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.911 + 
FIO_BIN=/usr/src/fio-static/fio 00:01:18.911 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.911 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.911 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.911 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.911 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.911 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.911 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.911 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.911 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:19.171 14:33:57 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:19.171 14:33:57 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:19.171 14:33:57 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.171 14:33:57 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:19.171 14:33:57 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:19.171 14:33:57 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:19.171 14:33:57 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.171 14:33:57 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:19.171 14:33:57 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:19.171 14:33:57 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:19.171 Traceback (most recent call last): 00:01:19.171 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in 00:01:19.171 import spdk.rpc as rpc # noqa 00:01:19.171 ^^^^^^^^^^^^^^^^^^^^^^ 00:01:19.171 File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in 00:01:19.171 from .version import __version__ 00:01:19.171 ModuleNotFoundError: No module named 'spdk.version' 00:01:19.171 14:33:57 -- common/autotest_common.sh@1710 -- $ [[ 
n == y ]] 00:01:19.171 14:33:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:19.171 14:33:57 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:19.171 14:33:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.171 14:33:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.171 14:33:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.171 14:33:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.171 14:33:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.171 14:33:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.171 14:33:57 -- paths/export.sh@5 -- $ export PATH 00:01:19.171 14:33:57 -- 
paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.171 14:33:57 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:19.171 14:33:57 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:19.171 14:33:57 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733754837.XXXXXX 00:01:19.171 14:33:57 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733754837.QYodyN 00:01:19.171 14:33:57 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:19.171 14:33:57 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:19.171 14:33:57 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:19.171 14:33:57 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:19.171 14:33:57 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.171 14:33:57 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:19.171 14:33:57 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:19.171 14:33:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.171 14:33:57 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage 
--with-ublk --with-raid5f' 00:01:19.171 14:33:57 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:19.171 14:33:57 -- pm/common@17 -- $ local monitor 00:01:19.171 14:33:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.171 14:33:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.171 14:33:57 -- pm/common@25 -- $ sleep 1 00:01:19.171 14:33:57 -- pm/common@21 -- $ date +%s 00:01:19.171 14:33:57 -- pm/common@21 -- $ date +%s 00:01:19.171 14:33:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733754837 00:01:19.171 14:33:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733754837 00:01:19.171 Traceback (most recent call last): 00:01:19.171 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in 00:01:19.171 import spdk.rpc as rpc # noqa 00:01:19.171 ^^^^^^^^^^^^^^^^^^^^^^ 00:01:19.171 File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in 00:01:19.171 from .version import __version__ 00:01:19.171 ModuleNotFoundError: No module named 'spdk.version' 00:01:19.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733754837_collect-cpu-load.pm.log 00:01:19.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733754837_collect-vmstat.pm.log 00:01:20.110 14:33:58 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:20.110 14:33:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.110 14:33:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.111 14:33:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:20.111 14:33:58 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.111 Mon Dec 9 02:33:58 PM UTC 2024 00:01:20.111 14:33:58 -- spdk/autobuild.sh@17 -- $ git 
describe --tags 00:01:20.370 v25.01-pre-304-g805149865 00:01:20.370 14:33:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:20.370 14:33:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:20.370 14:33:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:20.370 14:33:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:20.370 14:33:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.370 ************************************ 00:01:20.370 START TEST asan 00:01:20.370 ************************************ 00:01:20.370 using asan 00:01:20.370 14:33:58 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:20.370 00:01:20.370 real 0m0.001s 00:01:20.370 user 0m0.001s 00:01:20.370 sys 0m0.000s 00:01:20.370 14:33:58 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:20.370 14:33:58 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.370 ************************************ 00:01:20.370 END TEST asan 00:01:20.370 ************************************ 00:01:20.370 14:33:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.370 14:33:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.370 14:33:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:20.370 14:33:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:20.370 14:33:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.370 ************************************ 00:01:20.370 START TEST ubsan 00:01:20.370 ************************************ 00:01:20.370 using ubsan 00:01:20.370 14:33:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:20.370 00:01:20.370 real 0m0.001s 00:01:20.370 user 0m0.000s 00:01:20.370 sys 0m0.001s 00:01:20.370 14:33:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:20.370 14:33:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.370 ************************************ 00:01:20.370 END TEST ubsan 00:01:20.370 
************************************ 00:01:20.370 14:33:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.370 14:33:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.370 14:33:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.370 14:33:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.370 14:33:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.370 14:33:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.370 14:33:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.370 14:33:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.371 14:33:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:20.630 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:20.630 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:21.198 Using 'verbs' RDMA provider 00:01:40.243 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:55.156 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:55.156 Creating mk/config.mk...done. 00:01:55.156 Creating mk/cc.flags.mk...done. 00:01:55.156 Type 'make' to build. 
00:01:55.156 14:34:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:55.156 14:34:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:55.156 14:34:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:55.156 14:34:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.156 ************************************ 00:01:55.156 START TEST make 00:01:55.156 ************************************ 00:01:55.156 14:34:31 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:05.141 The Meson build system 00:02:05.141 Version: 1.5.0 00:02:05.141 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:05.141 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:05.141 Build type: native build 00:02:05.141 Program cat found: YES (/usr/bin/cat) 00:02:05.141 Project name: DPDK 00:02:05.141 Project version: 24.03.0 00:02:05.141 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.141 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.141 Host machine cpu family: x86_64 00:02:05.141 Host machine cpu: x86_64 00:02:05.141 Message: ## Building in Developer Mode ## 00:02:05.141 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.141 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.141 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.141 Program python3 found: YES (/usr/bin/python3) 00:02:05.141 Program cat found: YES (/usr/bin/cat) 00:02:05.141 Compiler for C supports arguments -march=native: YES 00:02:05.141 Checking for size of "void *" : 8 00:02:05.141 Checking for size of "void *" : 8 (cached) 00:02:05.141 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:05.141 Library m found: YES 00:02:05.141 Library numa found: YES 00:02:05.141 Has header "numaif.h" : YES 00:02:05.141 Library fdt found: NO 00:02:05.141 Library 
execinfo found: NO 00:02:05.141 Has header "execinfo.h" : YES 00:02:05.141 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.141 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.141 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.141 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.141 Run-time dependency openssl found: YES 3.1.1 00:02:05.141 Run-time dependency libpcap found: YES 1.10.4 00:02:05.141 Has header "pcap.h" with dependency libpcap: YES 00:02:05.141 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.141 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.141 Compiler for C supports arguments -Wformat: YES 00:02:05.141 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.141 Compiler for C supports arguments -Wformat-security: NO 00:02:05.141 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.141 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.141 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.141 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.141 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.141 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.141 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.141 Compiler for C supports arguments -Wundef: YES 00:02:05.141 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.141 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.141 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.141 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.141 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.141 Program objdump found: YES (/usr/bin/objdump) 00:02:05.141 Compiler for C supports arguments -mavx512f: YES 00:02:05.141 Checking if "AVX512 checking" compiles: YES 00:02:05.141 Fetching value of 
define "__SSE4_2__" : 1 00:02:05.141 Fetching value of define "__AES__" : 1 00:02:05.141 Fetching value of define "__AVX__" : 1 00:02:05.141 Fetching value of define "__AVX2__" : 1 00:02:05.141 Fetching value of define "__AVX512BW__" : 1 00:02:05.141 Fetching value of define "__AVX512CD__" : 1 00:02:05.141 Fetching value of define "__AVX512DQ__" : 1 00:02:05.141 Fetching value of define "__AVX512F__" : 1 00:02:05.141 Fetching value of define "__AVX512VL__" : 1 00:02:05.141 Fetching value of define "__PCLMUL__" : 1 00:02:05.141 Fetching value of define "__RDRND__" : 1 00:02:05.141 Fetching value of define "__RDSEED__" : 1 00:02:05.141 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.141 Fetching value of define "__znver1__" : (undefined) 00:02:05.141 Fetching value of define "__znver2__" : (undefined) 00:02:05.141 Fetching value of define "__znver3__" : (undefined) 00:02:05.141 Fetching value of define "__znver4__" : (undefined) 00:02:05.141 Library asan found: YES 00:02:05.141 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.141 Message: lib/log: Defining dependency "log" 00:02:05.141 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.141 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.141 Library rt found: YES 00:02:05.141 Checking for function "getentropy" : NO 00:02:05.141 Message: lib/eal: Defining dependency "eal" 00:02:05.141 Message: lib/ring: Defining dependency "ring" 00:02:05.141 Message: lib/rcu: Defining dependency "rcu" 00:02:05.141 Message: lib/mempool: Defining dependency "mempool" 00:02:05.141 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.141 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.141 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.141 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.141 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.141 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.141 Fetching value 
of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:05.141 Compiler for C supports arguments -mpclmul: YES 00:02:05.141 Compiler for C supports arguments -maes: YES 00:02:05.141 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.141 Compiler for C supports arguments -mavx512bw: YES 00:02:05.141 Compiler for C supports arguments -mavx512dq: YES 00:02:05.141 Compiler for C supports arguments -mavx512vl: YES 00:02:05.141 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.141 Compiler for C supports arguments -mavx2: YES 00:02:05.141 Compiler for C supports arguments -mavx: YES 00:02:05.141 Message: lib/net: Defining dependency "net" 00:02:05.141 Message: lib/meter: Defining dependency "meter" 00:02:05.141 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.141 Message: lib/pci: Defining dependency "pci" 00:02:05.141 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.141 Message: lib/hash: Defining dependency "hash" 00:02:05.141 Message: lib/timer: Defining dependency "timer" 00:02:05.141 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.141 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.141 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.142 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.142 Message: lib/power: Defining dependency "power" 00:02:05.142 Message: lib/reorder: Defining dependency "reorder" 00:02:05.142 Message: lib/security: Defining dependency "security" 00:02:05.142 Has header "linux/userfaultfd.h" : YES 00:02:05.142 Has header "linux/vduse.h" : YES 00:02:05.142 Message: lib/vhost: Defining dependency "vhost" 00:02:05.142 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.142 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.142 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.142 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.142 Message: Disabling raw/* 
drivers: missing internal dependency "rawdev" 00:02:05.142 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.142 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.142 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.142 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.142 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.142 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.142 Configuring doxy-api-html.conf using configuration 00:02:05.142 Configuring doxy-api-man.conf using configuration 00:02:05.142 Program mandb found: YES (/usr/bin/mandb) 00:02:05.142 Program sphinx-build found: NO 00:02:05.142 Configuring rte_build_config.h using configuration 00:02:05.142 Message: 00:02:05.142 ================= 00:02:05.142 Applications Enabled 00:02:05.142 ================= 00:02:05.142 00:02:05.142 apps: 00:02:05.142 00:02:05.142 00:02:05.142 Message: 00:02:05.142 ================= 00:02:05.142 Libraries Enabled 00:02:05.142 ================= 00:02:05.142 00:02:05.142 libs: 00:02:05.142 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.142 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.142 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.142 00:02:05.142 Message: 00:02:05.142 =============== 00:02:05.142 Drivers Enabled 00:02:05.142 =============== 00:02:05.142 00:02:05.142 common: 00:02:05.142 00:02:05.142 bus: 00:02:05.142 pci, vdev, 00:02:05.142 mempool: 00:02:05.142 ring, 00:02:05.142 dma: 00:02:05.142 00:02:05.142 net: 00:02:05.142 00:02:05.142 crypto: 00:02:05.142 00:02:05.142 compress: 00:02:05.142 00:02:05.142 vdpa: 00:02:05.142 00:02:05.142 00:02:05.142 Message: 00:02:05.142 ================= 00:02:05.142 Content Skipped 00:02:05.142 ================= 00:02:05.142 00:02:05.142 apps: 00:02:05.142 dumpcap: explicitly disabled via build config 
00:02:05.142 graph: explicitly disabled via build config 00:02:05.142 pdump: explicitly disabled via build config 00:02:05.142 proc-info: explicitly disabled via build config 00:02:05.142 test-acl: explicitly disabled via build config 00:02:05.142 test-bbdev: explicitly disabled via build config 00:02:05.142 test-cmdline: explicitly disabled via build config 00:02:05.142 test-compress-perf: explicitly disabled via build config 00:02:05.142 test-crypto-perf: explicitly disabled via build config 00:02:05.142 test-dma-perf: explicitly disabled via build config 00:02:05.142 test-eventdev: explicitly disabled via build config 00:02:05.142 test-fib: explicitly disabled via build config 00:02:05.142 test-flow-perf: explicitly disabled via build config 00:02:05.142 test-gpudev: explicitly disabled via build config 00:02:05.142 test-mldev: explicitly disabled via build config 00:02:05.142 test-pipeline: explicitly disabled via build config 00:02:05.142 test-pmd: explicitly disabled via build config 00:02:05.142 test-regex: explicitly disabled via build config 00:02:05.142 test-sad: explicitly disabled via build config 00:02:05.142 test-security-perf: explicitly disabled via build config 00:02:05.142 00:02:05.142 libs: 00:02:05.142 argparse: explicitly disabled via build config 00:02:05.142 metrics: explicitly disabled via build config 00:02:05.142 acl: explicitly disabled via build config 00:02:05.142 bbdev: explicitly disabled via build config 00:02:05.142 bitratestats: explicitly disabled via build config 00:02:05.142 bpf: explicitly disabled via build config 00:02:05.142 cfgfile: explicitly disabled via build config 00:02:05.142 distributor: explicitly disabled via build config 00:02:05.142 efd: explicitly disabled via build config 00:02:05.142 eventdev: explicitly disabled via build config 00:02:05.142 dispatcher: explicitly disabled via build config 00:02:05.142 gpudev: explicitly disabled via build config 00:02:05.142 gro: explicitly disabled via build config 
00:02:05.142 gso: explicitly disabled via build config 00:02:05.142 ip_frag: explicitly disabled via build config 00:02:05.142 jobstats: explicitly disabled via build config 00:02:05.142 latencystats: explicitly disabled via build config 00:02:05.142 lpm: explicitly disabled via build config 00:02:05.142 member: explicitly disabled via build config 00:02:05.142 pcapng: explicitly disabled via build config 00:02:05.142 rawdev: explicitly disabled via build config 00:02:05.142 regexdev: explicitly disabled via build config 00:02:05.142 mldev: explicitly disabled via build config 00:02:05.142 rib: explicitly disabled via build config 00:02:05.142 sched: explicitly disabled via build config 00:02:05.142 stack: explicitly disabled via build config 00:02:05.142 ipsec: explicitly disabled via build config 00:02:05.142 pdcp: explicitly disabled via build config 00:02:05.142 fib: explicitly disabled via build config 00:02:05.142 port: explicitly disabled via build config 00:02:05.142 pdump: explicitly disabled via build config 00:02:05.142 table: explicitly disabled via build config 00:02:05.142 pipeline: explicitly disabled via build config 00:02:05.142 graph: explicitly disabled via build config 00:02:05.142 node: explicitly disabled via build config 00:02:05.142 00:02:05.142 drivers: 00:02:05.142 common/cpt: not in enabled drivers build config 00:02:05.142 common/dpaax: not in enabled drivers build config 00:02:05.142 common/iavf: not in enabled drivers build config 00:02:05.142 common/idpf: not in enabled drivers build config 00:02:05.142 common/ionic: not in enabled drivers build config 00:02:05.142 common/mvep: not in enabled drivers build config 00:02:05.142 common/octeontx: not in enabled drivers build config 00:02:05.142 bus/auxiliary: not in enabled drivers build config 00:02:05.142 bus/cdx: not in enabled drivers build config 00:02:05.142 bus/dpaa: not in enabled drivers build config 00:02:05.142 bus/fslmc: not in enabled drivers build config 00:02:05.142 
bus/ifpga: not in enabled drivers build config 00:02:05.142 bus/platform: not in enabled drivers build config 00:02:05.142 bus/uacce: not in enabled drivers build config 00:02:05.142 bus/vmbus: not in enabled drivers build config 00:02:05.142 common/cnxk: not in enabled drivers build config 00:02:05.142 common/mlx5: not in enabled drivers build config 00:02:05.142 common/nfp: not in enabled drivers build config 00:02:05.142 common/nitrox: not in enabled drivers build config 00:02:05.142 common/qat: not in enabled drivers build config 00:02:05.142 common/sfc_efx: not in enabled drivers build config 00:02:05.142 mempool/bucket: not in enabled drivers build config 00:02:05.142 mempool/cnxk: not in enabled drivers build config 00:02:05.142 mempool/dpaa: not in enabled drivers build config 00:02:05.142 mempool/dpaa2: not in enabled drivers build config 00:02:05.142 mempool/octeontx: not in enabled drivers build config 00:02:05.142 mempool/stack: not in enabled drivers build config 00:02:05.142 dma/cnxk: not in enabled drivers build config 00:02:05.142 dma/dpaa: not in enabled drivers build config 00:02:05.142 dma/dpaa2: not in enabled drivers build config 00:02:05.142 dma/hisilicon: not in enabled drivers build config 00:02:05.142 dma/idxd: not in enabled drivers build config 00:02:05.142 dma/ioat: not in enabled drivers build config 00:02:05.142 dma/skeleton: not in enabled drivers build config 00:02:05.142 net/af_packet: not in enabled drivers build config 00:02:05.142 net/af_xdp: not in enabled drivers build config 00:02:05.142 net/ark: not in enabled drivers build config 00:02:05.142 net/atlantic: not in enabled drivers build config 00:02:05.142 net/avp: not in enabled drivers build config 00:02:05.142 net/axgbe: not in enabled drivers build config 00:02:05.142 net/bnx2x: not in enabled drivers build config 00:02:05.142 net/bnxt: not in enabled drivers build config 00:02:05.142 net/bonding: not in enabled drivers build config 00:02:05.142 net/cnxk: not in enabled 
drivers build config 00:02:05.142 net/cpfl: not in enabled drivers build config 00:02:05.142 net/cxgbe: not in enabled drivers build config 00:02:05.142 net/dpaa: not in enabled drivers build config 00:02:05.142 net/dpaa2: not in enabled drivers build config 00:02:05.142 net/e1000: not in enabled drivers build config 00:02:05.142 net/ena: not in enabled drivers build config 00:02:05.142 net/enetc: not in enabled drivers build config 00:02:05.142 net/enetfec: not in enabled drivers build config 00:02:05.142 net/enic: not in enabled drivers build config 00:02:05.142 net/failsafe: not in enabled drivers build config 00:02:05.142 net/fm10k: not in enabled drivers build config 00:02:05.142 net/gve: not in enabled drivers build config 00:02:05.142 net/hinic: not in enabled drivers build config 00:02:05.142 net/hns3: not in enabled drivers build config 00:02:05.142 net/i40e: not in enabled drivers build config 00:02:05.142 net/iavf: not in enabled drivers build config 00:02:05.142 net/ice: not in enabled drivers build config 00:02:05.142 net/idpf: not in enabled drivers build config 00:02:05.142 net/igc: not in enabled drivers build config 00:02:05.142 net/ionic: not in enabled drivers build config 00:02:05.142 net/ipn3ke: not in enabled drivers build config 00:02:05.142 net/ixgbe: not in enabled drivers build config 00:02:05.142 net/mana: not in enabled drivers build config 00:02:05.142 net/memif: not in enabled drivers build config 00:02:05.142 net/mlx4: not in enabled drivers build config 00:02:05.142 net/mlx5: not in enabled drivers build config 00:02:05.142 net/mvneta: not in enabled drivers build config 00:02:05.142 net/mvpp2: not in enabled drivers build config 00:02:05.142 net/netvsc: not in enabled drivers build config 00:02:05.142 net/nfb: not in enabled drivers build config 00:02:05.142 net/nfp: not in enabled drivers build config 00:02:05.142 net/ngbe: not in enabled drivers build config 00:02:05.142 net/null: not in enabled drivers build config 00:02:05.142 
net/octeontx: not in enabled drivers build config 00:02:05.142 net/octeon_ep: not in enabled drivers build config 00:02:05.142 net/pcap: not in enabled drivers build config 00:02:05.142 net/pfe: not in enabled drivers build config 00:02:05.143 net/qede: not in enabled drivers build config 00:02:05.143 net/ring: not in enabled drivers build config 00:02:05.143 net/sfc: not in enabled drivers build config 00:02:05.143 net/softnic: not in enabled drivers build config 00:02:05.143 net/tap: not in enabled drivers build config 00:02:05.143 net/thunderx: not in enabled drivers build config 00:02:05.143 net/txgbe: not in enabled drivers build config 00:02:05.143 net/vdev_netvsc: not in enabled drivers build config 00:02:05.143 net/vhost: not in enabled drivers build config 00:02:05.143 net/virtio: not in enabled drivers build config 00:02:05.143 net/vmxnet3: not in enabled drivers build config 00:02:05.143 raw/*: missing internal dependency, "rawdev" 00:02:05.143 crypto/armv8: not in enabled drivers build config 00:02:05.143 crypto/bcmfs: not in enabled drivers build config 00:02:05.143 crypto/caam_jr: not in enabled drivers build config 00:02:05.143 crypto/ccp: not in enabled drivers build config 00:02:05.143 crypto/cnxk: not in enabled drivers build config 00:02:05.143 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.143 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.143 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.143 crypto/mlx5: not in enabled drivers build config 00:02:05.143 crypto/mvsam: not in enabled drivers build config 00:02:05.143 crypto/nitrox: not in enabled drivers build config 00:02:05.143 crypto/null: not in enabled drivers build config 00:02:05.143 crypto/octeontx: not in enabled drivers build config 00:02:05.143 crypto/openssl: not in enabled drivers build config 00:02:05.143 crypto/scheduler: not in enabled drivers build config 00:02:05.143 crypto/uadk: not in enabled drivers build config 00:02:05.143 
crypto/virtio: not in enabled drivers build config 00:02:05.143 compress/isal: not in enabled drivers build config 00:02:05.143 compress/mlx5: not in enabled drivers build config 00:02:05.143 compress/nitrox: not in enabled drivers build config 00:02:05.143 compress/octeontx: not in enabled drivers build config 00:02:05.143 compress/zlib: not in enabled drivers build config 00:02:05.143 regex/*: missing internal dependency, "regexdev" 00:02:05.143 ml/*: missing internal dependency, "mldev" 00:02:05.143 vdpa/ifc: not in enabled drivers build config 00:02:05.143 vdpa/mlx5: not in enabled drivers build config 00:02:05.143 vdpa/nfp: not in enabled drivers build config 00:02:05.143 vdpa/sfc: not in enabled drivers build config 00:02:05.143 event/*: missing internal dependency, "eventdev" 00:02:05.143 baseband/*: missing internal dependency, "bbdev" 00:02:05.143 gpu/*: missing internal dependency, "gpudev" 00:02:05.143 00:02:05.143 00:02:05.143 Build targets in project: 85 00:02:05.143 00:02:05.143 DPDK 24.03.0 00:02:05.143 00:02:05.143 User defined options 00:02:05.143 buildtype : debug 00:02:05.143 default_library : shared 00:02:05.143 libdir : lib 00:02:05.143 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.143 b_sanitize : address 00:02:05.143 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.143 c_link_args : 00:02:05.143 cpu_instruction_set: native 00:02:05.143 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.143 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.143 enable_docs 
: false 00:02:05.143 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:05.143 enable_kmods : false 00:02:05.143 max_lcores : 128 00:02:05.143 tests : false 00:02:05.143 00:02:05.143 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.143 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:05.402 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.402 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.402 [3/268] Linking static target lib/librte_kvargs.a 00:02:05.402 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.402 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.402 [6/268] Linking static target lib/librte_log.a 00:02:05.661 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.920 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.920 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.920 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.920 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.920 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.920 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.920 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.920 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:06.178 [16/268] Linking static target lib/librte_telemetry.a 00:02:06.178 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:06.178 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:06.437 
[19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.437 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:06.437 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.437 [22/268] Linking target lib/librte_log.so.24.1 00:02:06.437 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.437 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.696 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.696 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.696 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.696 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.696 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.955 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.955 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.955 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.955 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.955 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.955 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.955 [36/268] Linking target lib/librte_telemetry.so.24.1 00:02:07.214 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:07.214 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:07.214 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:07.214 [40/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:07.214 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:07.472 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:07.472 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:07.472 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:07.472 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.472 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:07.472 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.732 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.732 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.732 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.732 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.992 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.992 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.992 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.992 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.992 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:08.251 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:08.251 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:08.251 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:08.251 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.251 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:08.251 [62/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.510 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:08.510 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.510 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.510 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.769 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.769 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.769 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.769 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.769 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:09.029 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:09.029 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:09.029 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:09.029 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:09.029 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:09.288 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:09.288 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:09.288 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.288 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.288 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.288 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.548 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.548 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.548 [85/268] Linking static target lib/librte_eal.a 
00:02:09.548 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.807 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.807 [88/268] Linking static target lib/librte_ring.a 00:02:09.807 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.807 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.807 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.807 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.807 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.807 [94/268] Linking static target lib/librte_mempool.a 00:02:09.807 [95/268] Linking static target lib/librte_rcu.a 00:02:09.807 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:10.075 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.334 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.334 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.334 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.334 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.334 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.334 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.592 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.592 [105/268] Linking static target lib/librte_mbuf.a 00:02:10.592 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.592 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.592 [108/268] Linking static target lib/librte_meter.a 00:02:10.592 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.851 [110/268] 
Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.851 [111/268] Linking static target lib/librte_net.a 00:02:10.851 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.109 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.109 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.109 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.109 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.109 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.109 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.368 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.626 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.626 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.626 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.885 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.885 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.146 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.146 [126/268] Linking static target lib/librte_pci.a 00:02:12.146 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.146 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.146 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.146 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.146 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.146 [132/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.146 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.146 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.407 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.407 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.407 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.407 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.407 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.407 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.407 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.407 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.407 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.407 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:12.666 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.666 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.666 [147/268] Linking static target lib/librte_cmdline.a 00:02:12.666 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.925 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.925 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.925 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.185 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.185 [153/268] Linking static target lib/librte_timer.a 00:02:13.185 [154/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.185 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.445 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.445 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.445 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.445 [159/268] Linking static target lib/librte_hash.a 00:02:13.445 [160/268] Linking static target lib/librte_ethdev.a 00:02:13.445 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.704 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:13.704 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.704 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.704 [165/268] Linking static target lib/librte_compressdev.a 00:02:13.704 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.965 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.965 [168/268] Linking static target lib/librte_dmadev.a 00:02:13.965 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:13.965 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.224 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.224 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.224 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:14.484 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.484 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.484 [176/268] Generating lib/hash.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:14.484 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.484 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.744 [179/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.744 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.744 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.744 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.744 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.744 [184/268] Linking static target lib/librte_cryptodev.a 00:02:15.004 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.004 [186/268] Linking static target lib/librte_power.a 00:02:15.264 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.264 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.264 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.264 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.264 [191/268] Linking static target lib/librte_security.a 00:02:15.264 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.264 [193/268] Linking static target lib/librte_reorder.a 00:02:15.524 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.788 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.049 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.049 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.049 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.049 
[199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.049 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.617 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:16.617 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.617 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:16.617 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.617 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.618 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.876 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.876 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.876 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.876 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.136 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.136 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.136 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.136 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.136 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.136 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:17.136 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.136 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.136 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:17.136 [220/268] Compiling C 
object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:17.136 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:17.396 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.396 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.396 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.396 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:17.396 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.655 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.033 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:19.973 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.973 [230/268] Linking target lib/librte_eal.so.24.1 00:02:20.232 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:20.232 [232/268] Linking target lib/librte_timer.so.24.1 00:02:20.232 [233/268] Linking target lib/librte_meter.so.24.1 00:02:20.232 [234/268] Linking target lib/librte_pci.so.24.1 00:02:20.232 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:20.232 [236/268] Linking target lib/librte_ring.so.24.1 00:02:20.232 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:20.232 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:20.494 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:20.494 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:20.494 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:20.494 [242/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:20.494 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:20.494 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:20.494 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:20.494 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:20.494 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:20.494 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:20.494 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:20.754 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:20.754 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:20.754 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:20.754 [253/268] Linking target lib/librte_net.so.24.1 00:02:20.754 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:21.013 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:21.013 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:21.013 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:21.013 [258/268] Linking target lib/librte_hash.so.24.1 00:02:21.013 [259/268] Linking target lib/librte_security.so.24.1 00:02:21.013 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:21.968 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.968 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:22.248 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:22.248 [264/268] Linking target lib/librte_power.so.24.1 00:02:22.248 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:22.248 [266/268] Linking static target lib/librte_vhost.a 00:02:24.796 [267/268] Generating lib/vhost.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:25.056 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.056 INFO: autodetecting backend as ninja 00:02:25.056 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:43.143 CC lib/log/log.o 00:02:43.143 CC lib/log/log_flags.o 00:02:43.143 CC lib/ut/ut.o 00:02:43.143 CC lib/log/log_deprecated.o 00:02:43.143 CC lib/ut_mock/mock.o 00:02:43.143 LIB libspdk_ut.a 00:02:43.143 LIB libspdk_log.a 00:02:43.143 LIB libspdk_ut_mock.a 00:02:43.143 SO libspdk_ut_mock.so.6.0 00:02:43.143 SO libspdk_ut.so.2.0 00:02:43.143 SO libspdk_log.so.7.1 00:02:43.143 SYMLINK libspdk_ut_mock.so 00:02:43.143 SYMLINK libspdk_ut.so 00:02:43.143 SYMLINK libspdk_log.so 00:02:43.143 CXX lib/trace_parser/trace.o 00:02:43.143 CC lib/ioat/ioat.o 00:02:43.143 CC lib/dma/dma.o 00:02:43.143 CC lib/util/bit_array.o 00:02:43.143 CC lib/util/crc32c.o 00:02:43.143 CC lib/util/cpuset.o 00:02:43.143 CC lib/util/base64.o 00:02:43.143 CC lib/util/crc32.o 00:02:43.143 CC lib/util/crc16.o 00:02:43.143 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.143 CC lib/util/crc32_ieee.o 00:02:43.143 LIB libspdk_dma.a 00:02:43.143 CC lib/util/crc64.o 00:02:43.143 CC lib/util/dif.o 00:02:43.143 SO libspdk_dma.so.5.0 00:02:43.143 CC lib/util/fd.o 00:02:43.143 CC lib/util/fd_group.o 00:02:43.143 LIB libspdk_ioat.a 00:02:43.143 CC lib/vfio_user/host/vfio_user.o 00:02:43.143 SYMLINK libspdk_dma.so 00:02:43.143 CC lib/util/file.o 00:02:43.143 SO libspdk_ioat.so.7.0 00:02:43.143 CC lib/util/hexlify.o 00:02:43.143 CC lib/util/iov.o 00:02:43.143 SYMLINK libspdk_ioat.so 00:02:43.143 CC lib/util/math.o 00:02:43.143 CC lib/util/net.o 00:02:43.143 CC lib/util/pipe.o 00:02:43.143 CC lib/util/strerror_tls.o 00:02:43.143 CC lib/util/string.o 00:02:43.143 LIB libspdk_vfio_user.a 00:02:43.143 CC lib/util/uuid.o 00:02:43.143 CC lib/util/xor.o 00:02:43.143 SO libspdk_vfio_user.so.5.0 00:02:43.143 CC 
lib/util/zipf.o 00:02:43.143 CC lib/util/md5.o 00:02:43.143 SYMLINK libspdk_vfio_user.so 00:02:43.143 LIB libspdk_util.a 00:02:43.401 LIB libspdk_trace_parser.a 00:02:43.401 SO libspdk_util.so.10.1 00:02:43.401 SO libspdk_trace_parser.so.6.0 00:02:43.401 SYMLINK libspdk_util.so 00:02:43.401 SYMLINK libspdk_trace_parser.so 00:02:43.660 CC lib/idxd/idxd.o 00:02:43.660 CC lib/idxd/idxd_kernel.o 00:02:43.660 CC lib/idxd/idxd_user.o 00:02:43.660 CC lib/conf/conf.o 00:02:43.660 CC lib/env_dpdk/memory.o 00:02:43.660 CC lib/env_dpdk/env.o 00:02:43.660 CC lib/env_dpdk/pci.o 00:02:43.660 CC lib/vmd/vmd.o 00:02:43.660 CC lib/json/json_parse.o 00:02:43.660 CC lib/rdma_utils/rdma_utils.o 00:02:43.919 CC lib/env_dpdk/init.o 00:02:43.919 LIB libspdk_conf.a 00:02:43.919 CC lib/json/json_util.o 00:02:43.919 CC lib/env_dpdk/threads.o 00:02:43.919 SO libspdk_conf.so.6.0 00:02:43.919 LIB libspdk_rdma_utils.a 00:02:44.178 SYMLINK libspdk_conf.so 00:02:44.178 SO libspdk_rdma_utils.so.1.0 00:02:44.178 CC lib/vmd/led.o 00:02:44.178 CC lib/json/json_write.o 00:02:44.178 SYMLINK libspdk_rdma_utils.so 00:02:44.178 CC lib/env_dpdk/pci_ioat.o 00:02:44.178 CC lib/env_dpdk/pci_virtio.o 00:02:44.178 CC lib/env_dpdk/pci_vmd.o 00:02:44.178 CC lib/env_dpdk/pci_idxd.o 00:02:44.178 CC lib/env_dpdk/pci_event.o 00:02:44.178 CC lib/env_dpdk/sigbus_handler.o 00:02:44.438 CC lib/env_dpdk/pci_dpdk.o 00:02:44.438 LIB libspdk_idxd.a 00:02:44.438 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.438 LIB libspdk_json.a 00:02:44.438 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.438 CC lib/rdma_provider/common.o 00:02:44.438 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:44.438 SO libspdk_idxd.so.12.1 00:02:44.438 SO libspdk_json.so.6.0 00:02:44.438 LIB libspdk_vmd.a 00:02:44.438 SO libspdk_vmd.so.6.0 00:02:44.438 SYMLINK libspdk_idxd.so 00:02:44.438 SYMLINK libspdk_json.so 00:02:44.438 SYMLINK libspdk_vmd.so 00:02:44.703 LIB libspdk_rdma_provider.a 00:02:44.703 SO libspdk_rdma_provider.so.7.0 00:02:44.703 SYMLINK 
libspdk_rdma_provider.so 00:02:44.703 CC lib/jsonrpc/jsonrpc_client.o 00:02:44.703 CC lib/jsonrpc/jsonrpc_server.o 00:02:44.703 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:44.703 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.279 LIB libspdk_jsonrpc.a 00:02:45.279 SO libspdk_jsonrpc.so.6.0 00:02:45.279 SYMLINK libspdk_jsonrpc.so 00:02:45.279 LIB libspdk_env_dpdk.a 00:02:45.279 SO libspdk_env_dpdk.so.15.1 00:02:45.538 SYMLINK libspdk_env_dpdk.so 00:02:45.538 CC lib/rpc/rpc.o 00:02:45.798 LIB libspdk_rpc.a 00:02:46.057 SO libspdk_rpc.so.6.0 00:02:46.057 SYMLINK libspdk_rpc.so 00:02:46.316 CC lib/keyring/keyring.o 00:02:46.316 CC lib/keyring/keyring_rpc.o 00:02:46.316 CC lib/notify/notify.o 00:02:46.316 CC lib/notify/notify_rpc.o 00:02:46.316 CC lib/trace/trace.o 00:02:46.316 CC lib/trace/trace_flags.o 00:02:46.316 CC lib/trace/trace_rpc.o 00:02:46.575 LIB libspdk_notify.a 00:02:46.575 LIB libspdk_keyring.a 00:02:46.575 SO libspdk_notify.so.6.0 00:02:46.575 SO libspdk_keyring.so.2.0 00:02:46.835 LIB libspdk_trace.a 00:02:46.835 SYMLINK libspdk_notify.so 00:02:46.835 SYMLINK libspdk_keyring.so 00:02:46.835 SO libspdk_trace.so.11.0 00:02:46.835 SYMLINK libspdk_trace.so 00:02:47.404 CC lib/sock/sock.o 00:02:47.404 CC lib/sock/sock_rpc.o 00:02:47.404 CC lib/thread/thread.o 00:02:47.404 CC lib/thread/iobuf.o 00:02:47.663 LIB libspdk_sock.a 00:02:47.663 SO libspdk_sock.so.10.0 00:02:47.924 SYMLINK libspdk_sock.so 00:02:48.183 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.183 CC lib/nvme/nvme_ctrlr.o 00:02:48.183 CC lib/nvme/nvme_fabric.o 00:02:48.183 CC lib/nvme/nvme_ns_cmd.o 00:02:48.183 CC lib/nvme/nvme_ns.o 00:02:48.183 CC lib/nvme/nvme_pcie_common.o 00:02:48.183 CC lib/nvme/nvme_pcie.o 00:02:48.183 CC lib/nvme/nvme_qpair.o 00:02:48.183 CC lib/nvme/nvme.o 00:02:49.121 CC lib/nvme/nvme_quirks.o 00:02:49.121 CC lib/nvme/nvme_transport.o 00:02:49.121 CC lib/nvme/nvme_discovery.o 00:02:49.121 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:49.121 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:49.121 CC 
lib/nvme/nvme_tcp.o 00:02:49.121 LIB libspdk_thread.a 00:02:49.121 CC lib/nvme/nvme_opal.o 00:02:49.121 SO libspdk_thread.so.11.0 00:02:49.382 SYMLINK libspdk_thread.so 00:02:49.382 CC lib/nvme/nvme_io_msg.o 00:02:49.382 CC lib/nvme/nvme_poll_group.o 00:02:49.382 CC lib/nvme/nvme_zns.o 00:02:49.649 CC lib/nvme/nvme_stubs.o 00:02:49.649 CC lib/nvme/nvme_auth.o 00:02:49.649 CC lib/nvme/nvme_cuse.o 00:02:49.923 CC lib/accel/accel.o 00:02:49.923 CC lib/blob/blobstore.o 00:02:49.923 CC lib/blob/request.o 00:02:49.923 CC lib/blob/zeroes.o 00:02:50.183 CC lib/init/json_config.o 00:02:50.183 CC lib/virtio/virtio.o 00:02:50.183 CC lib/blob/blob_bs_dev.o 00:02:50.443 CC lib/fsdev/fsdev.o 00:02:50.443 CC lib/nvme/nvme_rdma.o 00:02:50.443 CC lib/init/subsystem.o 00:02:50.443 CC lib/init/subsystem_rpc.o 00:02:50.443 CC lib/virtio/virtio_vhost_user.o 00:02:50.703 CC lib/virtio/virtio_vfio_user.o 00:02:50.703 CC lib/virtio/virtio_pci.o 00:02:50.703 CC lib/init/rpc.o 00:02:50.703 CC lib/accel/accel_rpc.o 00:02:50.703 CC lib/accel/accel_sw.o 00:02:50.963 CC lib/fsdev/fsdev_io.o 00:02:50.963 CC lib/fsdev/fsdev_rpc.o 00:02:50.963 LIB libspdk_init.a 00:02:50.963 SO libspdk_init.so.6.0 00:02:50.963 SYMLINK libspdk_init.so 00:02:50.963 LIB libspdk_virtio.a 00:02:50.963 SO libspdk_virtio.so.7.0 00:02:51.222 SYMLINK libspdk_virtio.so 00:02:51.222 LIB libspdk_accel.a 00:02:51.222 SO libspdk_accel.so.16.0 00:02:51.222 LIB libspdk_fsdev.a 00:02:51.222 CC lib/event/app.o 00:02:51.222 CC lib/event/reactor.o 00:02:51.222 CC lib/event/log_rpc.o 00:02:51.222 CC lib/event/app_rpc.o 00:02:51.222 CC lib/event/scheduler_static.o 00:02:51.222 SO libspdk_fsdev.so.2.0 00:02:51.222 SYMLINK libspdk_accel.so 00:02:51.481 SYMLINK libspdk_fsdev.so 00:02:51.481 CC lib/bdev/bdev_rpc.o 00:02:51.481 CC lib/bdev/bdev.o 00:02:51.481 CC lib/bdev/part.o 00:02:51.481 CC lib/bdev/bdev_zone.o 00:02:51.740 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.740 CC lib/bdev/scsi_nvme.o 00:02:52.000 LIB libspdk_event.a 
00:02:52.000 SO libspdk_event.so.14.0 00:02:52.000 SYMLINK libspdk_event.so 00:02:52.000 LIB libspdk_nvme.a 00:02:52.259 LIB libspdk_fuse_dispatcher.a 00:02:52.259 SO libspdk_nvme.so.15.0 00:02:52.259 SO libspdk_fuse_dispatcher.so.1.0 00:02:52.518 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.518 SYMLINK libspdk_nvme.so 00:02:53.465 LIB libspdk_blob.a 00:02:53.465 SO libspdk_blob.so.12.0 00:02:53.725 SYMLINK libspdk_blob.so 00:02:53.985 CC lib/lvol/lvol.o 00:02:54.255 CC lib/blobfs/blobfs.o 00:02:54.255 CC lib/blobfs/tree.o 00:02:54.823 LIB libspdk_bdev.a 00:02:54.823 SO libspdk_bdev.so.17.0 00:02:55.084 SYMLINK libspdk_bdev.so 00:02:55.084 LIB libspdk_blobfs.a 00:02:55.084 SO libspdk_blobfs.so.11.0 00:02:55.084 LIB libspdk_lvol.a 00:02:55.084 SO libspdk_lvol.so.11.0 00:02:55.084 SYMLINK libspdk_blobfs.so 00:02:55.084 CC lib/ublk/ublk_rpc.o 00:02:55.084 CC lib/ublk/ublk.o 00:02:55.084 CC lib/nbd/nbd.o 00:02:55.084 CC lib/nbd/nbd_rpc.o 00:02:55.084 CC lib/nvmf/ctrlr.o 00:02:55.084 CC lib/nvmf/ctrlr_discovery.o 00:02:55.084 CC lib/nvmf/ctrlr_bdev.o 00:02:55.084 CC lib/scsi/dev.o 00:02:55.084 CC lib/ftl/ftl_core.o 00:02:55.341 SYMLINK libspdk_lvol.so 00:02:55.341 CC lib/ftl/ftl_init.o 00:02:55.341 CC lib/scsi/lun.o 00:02:55.341 CC lib/scsi/port.o 00:02:55.341 CC lib/scsi/scsi.o 00:02:55.341 CC lib/ftl/ftl_layout.o 00:02:55.599 CC lib/scsi/scsi_bdev.o 00:02:55.599 CC lib/scsi/scsi_pr.o 00:02:55.599 CC lib/scsi/scsi_rpc.o 00:02:55.599 CC lib/nvmf/subsystem.o 00:02:55.599 LIB libspdk_nbd.a 00:02:55.599 SO libspdk_nbd.so.7.0 00:02:55.599 CC lib/scsi/task.o 00:02:55.858 CC lib/nvmf/nvmf.o 00:02:55.858 SYMLINK libspdk_nbd.so 00:02:55.858 CC lib/nvmf/nvmf_rpc.o 00:02:55.858 CC lib/ftl/ftl_debug.o 00:02:55.858 CC lib/nvmf/transport.o 00:02:55.858 CC lib/ftl/ftl_io.o 00:02:55.858 CC lib/nvmf/tcp.o 00:02:55.858 LIB libspdk_ublk.a 00:02:55.858 SO libspdk_ublk.so.3.0 00:02:56.117 LIB libspdk_scsi.a 00:02:56.118 CC lib/nvmf/stubs.o 00:02:56.118 SYMLINK libspdk_ublk.so 00:02:56.118 
CC lib/nvmf/mdns_server.o 00:02:56.118 SO libspdk_scsi.so.9.0 00:02:56.118 CC lib/ftl/ftl_sb.o 00:02:56.118 SYMLINK libspdk_scsi.so 00:02:56.118 CC lib/ftl/ftl_l2p.o 00:02:56.377 CC lib/ftl/ftl_l2p_flat.o 00:02:56.377 CC lib/ftl/ftl_nv_cache.o 00:02:56.650 CC lib/nvmf/rdma.o 00:02:56.650 CC lib/nvmf/auth.o 00:02:56.650 CC lib/ftl/ftl_band.o 00:02:56.650 CC lib/ftl/ftl_band_ops.o 00:02:56.650 CC lib/iscsi/conn.o 00:02:56.650 CC lib/vhost/vhost.o 00:02:56.909 CC lib/vhost/vhost_rpc.o 00:02:56.909 CC lib/vhost/vhost_scsi.o 00:02:56.910 CC lib/iscsi/init_grp.o 00:02:57.169 CC lib/iscsi/iscsi.o 00:02:57.429 CC lib/iscsi/param.o 00:02:57.429 CC lib/vhost/vhost_blk.o 00:02:57.429 CC lib/iscsi/portal_grp.o 00:02:57.688 CC lib/iscsi/tgt_node.o 00:02:57.688 CC lib/iscsi/iscsi_subsystem.o 00:02:57.688 CC lib/vhost/rte_vhost_user.o 00:02:57.688 CC lib/ftl/ftl_writer.o 00:02:57.688 CC lib/iscsi/iscsi_rpc.o 00:02:57.688 CC lib/iscsi/task.o 00:02:57.949 CC lib/ftl/ftl_rq.o 00:02:57.949 CC lib/ftl/ftl_reloc.o 00:02:57.949 CC lib/ftl/ftl_l2p_cache.o 00:02:58.208 CC lib/ftl/ftl_p2l.o 00:02:58.208 CC lib/ftl/ftl_p2l_log.o 00:02:58.208 CC lib/ftl/mngt/ftl_mngt.o 00:02:58.208 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:58.208 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:58.468 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.468 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.468 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.468 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.468 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.468 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.728 LIB libspdk_iscsi.a 00:02:58.728 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:58.728 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:58.728 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:58.728 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:58.728 SO libspdk_iscsi.so.8.0 00:02:58.728 LIB libspdk_vhost.a 00:02:58.728 CC lib/ftl/utils/ftl_conf.o 00:02:58.728 CC lib/ftl/utils/ftl_md.o 00:02:58.988 SO libspdk_vhost.so.8.0 00:02:58.988 CC lib/ftl/utils/ftl_mempool.o 00:02:58.988 CC 
lib/ftl/utils/ftl_bitmap.o 00:02:58.988 CC lib/ftl/utils/ftl_property.o 00:02:58.988 SYMLINK libspdk_iscsi.so 00:02:58.988 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:58.988 SYMLINK libspdk_vhost.so 00:02:58.988 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:58.988 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:58.988 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:58.988 LIB libspdk_nvmf.a 00:02:59.246 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:59.246 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:59.246 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:59.246 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:59.246 SO libspdk_nvmf.so.20.0 00:02:59.246 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:59.246 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:59.246 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:59.246 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:59.504 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:59.504 CC lib/ftl/base/ftl_base_dev.o 00:02:59.504 CC lib/ftl/base/ftl_base_bdev.o 00:02:59.504 CC lib/ftl/ftl_trace.o 00:02:59.504 SYMLINK libspdk_nvmf.so 00:02:59.763 LIB libspdk_ftl.a 00:03:00.022 SO libspdk_ftl.so.9.0 00:03:00.281 SYMLINK libspdk_ftl.so 00:03:00.849 CC module/env_dpdk/env_dpdk_rpc.o 00:03:00.849 CC module/sock/posix/posix.o 00:03:00.849 CC module/accel/dsa/accel_dsa.o 00:03:00.849 CC module/fsdev/aio/fsdev_aio.o 00:03:00.849 CC module/accel/iaa/accel_iaa.o 00:03:00.849 CC module/accel/error/accel_error.o 00:03:00.849 CC module/keyring/file/keyring.o 00:03:00.849 CC module/accel/ioat/accel_ioat.o 00:03:00.849 CC module/blob/bdev/blob_bdev.o 00:03:00.849 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:00.849 LIB libspdk_env_dpdk_rpc.a 00:03:00.849 SO libspdk_env_dpdk_rpc.so.6.0 00:03:00.849 SYMLINK libspdk_env_dpdk_rpc.so 00:03:00.849 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:01.108 CC module/keyring/file/keyring_rpc.o 00:03:01.108 CC module/accel/ioat/accel_ioat_rpc.o 00:03:01.108 CC module/accel/iaa/accel_iaa_rpc.o 00:03:01.108 CC module/accel/error/accel_error_rpc.o 00:03:01.108 LIB 
libspdk_scheduler_dynamic.a 00:03:01.108 SO libspdk_scheduler_dynamic.so.4.0 00:03:01.108 LIB libspdk_keyring_file.a 00:03:01.108 SYMLINK libspdk_scheduler_dynamic.so 00:03:01.108 CC module/fsdev/aio/linux_aio_mgr.o 00:03:01.108 LIB libspdk_blob_bdev.a 00:03:01.108 SO libspdk_keyring_file.so.2.0 00:03:01.108 CC module/accel/dsa/accel_dsa_rpc.o 00:03:01.108 LIB libspdk_accel_ioat.a 00:03:01.108 SO libspdk_blob_bdev.so.12.0 00:03:01.108 LIB libspdk_accel_iaa.a 00:03:01.108 LIB libspdk_accel_error.a 00:03:01.108 SO libspdk_accel_ioat.so.6.0 00:03:01.108 SO libspdk_accel_iaa.so.3.0 00:03:01.108 SYMLINK libspdk_keyring_file.so 00:03:01.108 SO libspdk_accel_error.so.2.0 00:03:01.367 SYMLINK libspdk_blob_bdev.so 00:03:01.367 SYMLINK libspdk_accel_ioat.so 00:03:01.367 SYMLINK libspdk_accel_error.so 00:03:01.367 SYMLINK libspdk_accel_iaa.so 00:03:01.367 LIB libspdk_accel_dsa.a 00:03:01.367 SO libspdk_accel_dsa.so.5.0 00:03:01.367 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:01.367 SYMLINK libspdk_accel_dsa.so 00:03:01.367 CC module/keyring/linux/keyring.o 00:03:01.367 CC module/scheduler/gscheduler/gscheduler.o 00:03:01.625 LIB libspdk_scheduler_dpdk_governor.a 00:03:01.625 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:01.625 CC module/bdev/error/vbdev_error.o 00:03:01.625 CC module/bdev/delay/vbdev_delay.o 00:03:01.625 CC module/bdev/gpt/gpt.o 00:03:01.625 CC module/blobfs/bdev/blobfs_bdev.o 00:03:01.625 CC module/bdev/lvol/vbdev_lvol.o 00:03:01.625 CC module/keyring/linux/keyring_rpc.o 00:03:01.625 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:01.625 LIB libspdk_fsdev_aio.a 00:03:01.625 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:01.625 LIB libspdk_scheduler_gscheduler.a 00:03:01.625 SO libspdk_fsdev_aio.so.1.0 00:03:01.625 SO libspdk_scheduler_gscheduler.so.4.0 00:03:01.625 LIB libspdk_sock_posix.a 00:03:01.625 SO libspdk_sock_posix.so.6.0 00:03:01.625 SYMLINK libspdk_scheduler_gscheduler.so 00:03:01.625 LIB libspdk_keyring_linux.a 00:03:01.625 SYMLINK 
libspdk_fsdev_aio.so 00:03:01.883 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:01.883 CC module/bdev/error/vbdev_error_rpc.o 00:03:01.883 CC module/bdev/gpt/vbdev_gpt.o 00:03:01.883 SO libspdk_keyring_linux.so.1.0 00:03:01.883 SYMLINK libspdk_sock_posix.so 00:03:01.883 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:01.883 SYMLINK libspdk_keyring_linux.so 00:03:01.883 CC module/bdev/malloc/bdev_malloc.o 00:03:01.883 LIB libspdk_blobfs_bdev.a 00:03:01.883 LIB libspdk_bdev_error.a 00:03:01.884 SO libspdk_blobfs_bdev.so.6.0 00:03:01.884 SO libspdk_bdev_error.so.6.0 00:03:01.884 LIB libspdk_bdev_delay.a 00:03:02.142 CC module/bdev/null/bdev_null.o 00:03:02.142 CC module/bdev/nvme/bdev_nvme.o 00:03:02.142 SYMLINK libspdk_blobfs_bdev.so 00:03:02.142 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:02.142 SO libspdk_bdev_delay.so.6.0 00:03:02.142 SYMLINK libspdk_bdev_error.so 00:03:02.142 LIB libspdk_bdev_gpt.a 00:03:02.142 SO libspdk_bdev_gpt.so.6.0 00:03:02.142 SYMLINK libspdk_bdev_delay.so 00:03:02.142 CC module/bdev/nvme/nvme_rpc.o 00:03:02.142 CC module/bdev/passthru/vbdev_passthru.o 00:03:02.142 SYMLINK libspdk_bdev_gpt.so 00:03:02.142 LIB libspdk_bdev_lvol.a 00:03:02.142 CC module/bdev/raid/bdev_raid.o 00:03:02.142 SO libspdk_bdev_lvol.so.6.0 00:03:02.402 CC module/bdev/split/vbdev_split.o 00:03:02.402 CC module/bdev/null/bdev_null_rpc.o 00:03:02.402 SYMLINK libspdk_bdev_lvol.so 00:03:02.402 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:02.402 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:02.402 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:02.402 CC module/bdev/split/vbdev_split_rpc.o 00:03:02.402 CC module/bdev/aio/bdev_aio.o 00:03:02.402 LIB libspdk_bdev_null.a 00:03:02.661 LIB libspdk_bdev_malloc.a 00:03:02.661 CC module/bdev/ftl/bdev_ftl.o 00:03:02.661 SO libspdk_bdev_null.so.6.0 00:03:02.661 SO libspdk_bdev_malloc.so.6.0 00:03:02.661 SYMLINK libspdk_bdev_null.so 00:03:02.661 SYMLINK libspdk_bdev_malloc.so 00:03:02.661 CC module/bdev/nvme/bdev_mdns_client.o 
00:03:02.661 LIB libspdk_bdev_split.a 00:03:02.661 LIB libspdk_bdev_passthru.a 00:03:02.661 SO libspdk_bdev_split.so.6.0 00:03:02.661 SO libspdk_bdev_passthru.so.6.0 00:03:02.661 SYMLINK libspdk_bdev_split.so 00:03:02.920 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:02.920 SYMLINK libspdk_bdev_passthru.so 00:03:02.920 CC module/bdev/iscsi/bdev_iscsi.o 00:03:02.920 CC module/bdev/raid/bdev_raid_rpc.o 00:03:02.920 CC module/bdev/raid/bdev_raid_sb.o 00:03:02.920 CC module/bdev/raid/raid0.o 00:03:02.920 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:02.920 CC module/bdev/aio/bdev_aio_rpc.o 00:03:02.920 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:02.920 LIB libspdk_bdev_zone_block.a 00:03:02.920 SO libspdk_bdev_zone_block.so.6.0 00:03:03.180 LIB libspdk_bdev_aio.a 00:03:03.180 CC module/bdev/raid/raid1.o 00:03:03.180 SYMLINK libspdk_bdev_zone_block.so 00:03:03.180 CC module/bdev/raid/concat.o 00:03:03.180 SO libspdk_bdev_aio.so.6.0 00:03:03.180 CC module/bdev/raid/raid5f.o 00:03:03.180 LIB libspdk_bdev_ftl.a 00:03:03.180 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:03.180 SYMLINK libspdk_bdev_aio.so 00:03:03.180 CC module/bdev/nvme/vbdev_opal.o 00:03:03.180 SO libspdk_bdev_ftl.so.6.0 00:03:03.180 SYMLINK libspdk_bdev_ftl.so 00:03:03.180 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:03.180 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:03.180 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:03.439 LIB libspdk_bdev_iscsi.a 00:03:03.439 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:03.439 SO libspdk_bdev_iscsi.so.6.0 00:03:03.439 SYMLINK libspdk_bdev_iscsi.so 00:03:03.439 LIB libspdk_bdev_virtio.a 00:03:03.698 SO libspdk_bdev_virtio.so.6.0 00:03:03.698 LIB libspdk_bdev_raid.a 00:03:03.698 SYMLINK libspdk_bdev_virtio.so 00:03:03.698 SO libspdk_bdev_raid.so.6.0 00:03:03.958 SYMLINK libspdk_bdev_raid.so 00:03:05.348 LIB libspdk_bdev_nvme.a 00:03:05.348 SO libspdk_bdev_nvme.so.7.1 00:03:05.607 SYMLINK libspdk_bdev_nvme.so 00:03:06.180 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:06.180 CC module/event/subsystems/iobuf/iobuf.o 00:03:06.180 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:06.180 CC module/event/subsystems/sock/sock.o 00:03:06.180 CC module/event/subsystems/fsdev/fsdev.o 00:03:06.180 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:06.180 CC module/event/subsystems/keyring/keyring.o 00:03:06.180 CC module/event/subsystems/vmd/vmd.o 00:03:06.180 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:06.438 LIB libspdk_event_scheduler.a 00:03:06.438 LIB libspdk_event_fsdev.a 00:03:06.438 LIB libspdk_event_keyring.a 00:03:06.438 LIB libspdk_event_sock.a 00:03:06.438 LIB libspdk_event_vhost_blk.a 00:03:06.438 SO libspdk_event_scheduler.so.4.0 00:03:06.438 SO libspdk_event_fsdev.so.1.0 00:03:06.438 LIB libspdk_event_iobuf.a 00:03:06.438 SO libspdk_event_keyring.so.1.0 00:03:06.438 SO libspdk_event_sock.so.5.0 00:03:06.438 SO libspdk_event_vhost_blk.so.3.0 00:03:06.438 LIB libspdk_event_vmd.a 00:03:06.438 SO libspdk_event_iobuf.so.3.0 00:03:06.438 SYMLINK libspdk_event_fsdev.so 00:03:06.438 SYMLINK libspdk_event_scheduler.so 00:03:06.438 SYMLINK libspdk_event_keyring.so 00:03:06.438 SO libspdk_event_vmd.so.6.0 00:03:06.438 SYMLINK libspdk_event_sock.so 00:03:06.438 SYMLINK libspdk_event_vhost_blk.so 00:03:06.438 SYMLINK libspdk_event_iobuf.so 00:03:06.438 SYMLINK libspdk_event_vmd.so 00:03:07.006 CC module/event/subsystems/accel/accel.o 00:03:07.006 LIB libspdk_event_accel.a 00:03:07.264 SO libspdk_event_accel.so.6.0 00:03:07.264 SYMLINK libspdk_event_accel.so 00:03:07.523 CC module/event/subsystems/bdev/bdev.o 00:03:07.782 LIB libspdk_event_bdev.a 00:03:07.782 SO libspdk_event_bdev.so.6.0 00:03:08.040 SYMLINK libspdk_event_bdev.so 00:03:08.300 CC module/event/subsystems/nbd/nbd.o 00:03:08.300 CC module/event/subsystems/ublk/ublk.o 00:03:08.300 CC module/event/subsystems/scsi/scsi.o 00:03:08.300 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:08.300 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:03:08.560 LIB libspdk_event_nbd.a 00:03:08.560 LIB libspdk_event_scsi.a 00:03:08.560 SO libspdk_event_nbd.so.6.0 00:03:08.560 LIB libspdk_event_ublk.a 00:03:08.560 SO libspdk_event_scsi.so.6.0 00:03:08.560 SO libspdk_event_ublk.so.3.0 00:03:08.560 SYMLINK libspdk_event_nbd.so 00:03:08.560 SYMLINK libspdk_event_scsi.so 00:03:08.560 LIB libspdk_event_nvmf.a 00:03:08.560 SYMLINK libspdk_event_ublk.so 00:03:08.560 SO libspdk_event_nvmf.so.6.0 00:03:08.819 SYMLINK libspdk_event_nvmf.so 00:03:09.079 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:09.079 CC module/event/subsystems/iscsi/iscsi.o 00:03:09.079 LIB libspdk_event_vhost_scsi.a 00:03:09.079 SO libspdk_event_vhost_scsi.so.3.0 00:03:09.079 LIB libspdk_event_iscsi.a 00:03:09.079 SO libspdk_event_iscsi.so.6.0 00:03:09.338 SYMLINK libspdk_event_vhost_scsi.so 00:03:09.338 SYMLINK libspdk_event_iscsi.so 00:03:09.597 SO libspdk.so.6.0 00:03:09.597 SYMLINK libspdk.so 00:03:09.857 CXX app/trace/trace.o 00:03:09.857 CC app/spdk_lspci/spdk_lspci.o 00:03:09.857 CC app/trace_record/trace_record.o 00:03:09.857 CC app/spdk_nvme_perf/perf.o 00:03:09.857 CC app/nvmf_tgt/nvmf_main.o 00:03:09.857 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.857 CC app/spdk_tgt/spdk_tgt.o 00:03:09.857 CC examples/util/zipf/zipf.o 00:03:09.857 CC examples/ioat/perf/perf.o 00:03:09.857 CC test/thread/poller_perf/poller_perf.o 00:03:09.857 LINK spdk_lspci 00:03:10.116 LINK nvmf_tgt 00:03:10.116 LINK zipf 00:03:10.116 LINK iscsi_tgt 00:03:10.116 LINK spdk_tgt 00:03:10.116 LINK spdk_trace_record 00:03:10.116 LINK ioat_perf 00:03:10.116 LINK poller_perf 00:03:10.116 CC app/spdk_nvme_identify/identify.o 00:03:10.116 LINK spdk_trace 00:03:10.375 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.375 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.375 CC app/spdk_top/spdk_top.o 00:03:10.375 CC examples/ioat/verify/verify.o 00:03:10.375 CC app/spdk_dd/spdk_dd.o 00:03:10.375 CC test/dma/test_dma/test_dma.o 
00:03:10.375 CC app/fio/nvme/fio_plugin.o 00:03:10.375 LINK interrupt_tgt 00:03:10.634 LINK spdk_nvme_discover 00:03:10.634 LINK verify 00:03:10.634 CC test/app/bdev_svc/bdev_svc.o 00:03:10.634 LINK spdk_nvme_perf 00:03:10.892 LINK spdk_dd 00:03:10.892 LINK bdev_svc 00:03:10.892 CC app/vhost/vhost.o 00:03:10.892 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:10.892 CC examples/thread/thread/thread_ex.o 00:03:10.892 LINK test_dma 00:03:11.152 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.152 LINK vhost 00:03:11.152 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.152 LINK spdk_nvme 00:03:11.152 CC examples/sock/hello_world/hello_sock.o 00:03:11.152 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.152 LINK spdk_nvme_identify 00:03:11.152 LINK thread 00:03:11.152 TEST_HEADER include/spdk/accel.h 00:03:11.152 TEST_HEADER include/spdk/accel_module.h 00:03:11.411 TEST_HEADER include/spdk/assert.h 00:03:11.411 TEST_HEADER include/spdk/barrier.h 00:03:11.411 TEST_HEADER include/spdk/base64.h 00:03:11.411 TEST_HEADER include/spdk/bdev.h 00:03:11.411 TEST_HEADER include/spdk/bdev_module.h 00:03:11.411 TEST_HEADER include/spdk/bdev_zone.h 00:03:11.411 TEST_HEADER include/spdk/bit_array.h 00:03:11.411 TEST_HEADER include/spdk/bit_pool.h 00:03:11.411 TEST_HEADER include/spdk/blob_bdev.h 00:03:11.411 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:11.411 TEST_HEADER include/spdk/blobfs.h 00:03:11.411 TEST_HEADER include/spdk/blob.h 00:03:11.411 TEST_HEADER include/spdk/conf.h 00:03:11.411 TEST_HEADER include/spdk/config.h 00:03:11.411 TEST_HEADER include/spdk/cpuset.h 00:03:11.411 TEST_HEADER include/spdk/crc16.h 00:03:11.411 TEST_HEADER include/spdk/crc32.h 00:03:11.411 TEST_HEADER include/spdk/crc64.h 00:03:11.411 TEST_HEADER include/spdk/dif.h 00:03:11.411 TEST_HEADER include/spdk/dma.h 00:03:11.411 TEST_HEADER include/spdk/endian.h 00:03:11.411 TEST_HEADER include/spdk/env_dpdk.h 00:03:11.411 TEST_HEADER include/spdk/env.h 00:03:11.411 TEST_HEADER include/spdk/event.h 
00:03:11.411 TEST_HEADER include/spdk/fd_group.h 00:03:11.411 TEST_HEADER include/spdk/fd.h 00:03:11.411 TEST_HEADER include/spdk/file.h 00:03:11.411 TEST_HEADER include/spdk/fsdev.h 00:03:11.411 TEST_HEADER include/spdk/fsdev_module.h 00:03:11.411 TEST_HEADER include/spdk/ftl.h 00:03:11.411 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:11.411 TEST_HEADER include/spdk/gpt_spec.h 00:03:11.411 TEST_HEADER include/spdk/hexlify.h 00:03:11.411 TEST_HEADER include/spdk/histogram_data.h 00:03:11.411 TEST_HEADER include/spdk/idxd.h 00:03:11.411 TEST_HEADER include/spdk/idxd_spec.h 00:03:11.411 TEST_HEADER include/spdk/init.h 00:03:11.411 TEST_HEADER include/spdk/ioat.h 00:03:11.411 TEST_HEADER include/spdk/ioat_spec.h 00:03:11.411 TEST_HEADER include/spdk/iscsi_spec.h 00:03:11.411 TEST_HEADER include/spdk/json.h 00:03:11.411 TEST_HEADER include/spdk/jsonrpc.h 00:03:11.411 TEST_HEADER include/spdk/keyring.h 00:03:11.411 CC app/fio/bdev/fio_plugin.o 00:03:11.411 TEST_HEADER include/spdk/keyring_module.h 00:03:11.411 TEST_HEADER include/spdk/likely.h 00:03:11.411 TEST_HEADER include/spdk/log.h 00:03:11.411 TEST_HEADER include/spdk/lvol.h 00:03:11.411 TEST_HEADER include/spdk/md5.h 00:03:11.411 TEST_HEADER include/spdk/memory.h 00:03:11.411 TEST_HEADER include/spdk/mmio.h 00:03:11.411 TEST_HEADER include/spdk/nbd.h 00:03:11.411 TEST_HEADER include/spdk/net.h 00:03:11.411 TEST_HEADER include/spdk/notify.h 00:03:11.411 TEST_HEADER include/spdk/nvme.h 00:03:11.411 TEST_HEADER include/spdk/nvme_intel.h 00:03:11.411 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:11.411 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:11.411 TEST_HEADER include/spdk/nvme_spec.h 00:03:11.411 LINK nvme_fuzz 00:03:11.411 TEST_HEADER include/spdk/nvme_zns.h 00:03:11.411 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:11.411 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:11.411 TEST_HEADER include/spdk/nvmf.h 00:03:11.411 TEST_HEADER include/spdk/nvmf_spec.h 00:03:11.411 TEST_HEADER 
include/spdk/nvmf_transport.h 00:03:11.411 LINK spdk_top 00:03:11.411 TEST_HEADER include/spdk/opal.h 00:03:11.411 TEST_HEADER include/spdk/opal_spec.h 00:03:11.411 TEST_HEADER include/spdk/pci_ids.h 00:03:11.411 TEST_HEADER include/spdk/pipe.h 00:03:11.411 TEST_HEADER include/spdk/queue.h 00:03:11.411 TEST_HEADER include/spdk/reduce.h 00:03:11.411 TEST_HEADER include/spdk/rpc.h 00:03:11.411 TEST_HEADER include/spdk/scheduler.h 00:03:11.411 TEST_HEADER include/spdk/scsi.h 00:03:11.411 TEST_HEADER include/spdk/scsi_spec.h 00:03:11.411 TEST_HEADER include/spdk/sock.h 00:03:11.411 TEST_HEADER include/spdk/stdinc.h 00:03:11.411 TEST_HEADER include/spdk/string.h 00:03:11.411 TEST_HEADER include/spdk/thread.h 00:03:11.411 TEST_HEADER include/spdk/trace.h 00:03:11.411 TEST_HEADER include/spdk/trace_parser.h 00:03:11.411 TEST_HEADER include/spdk/tree.h 00:03:11.411 TEST_HEADER include/spdk/ublk.h 00:03:11.411 TEST_HEADER include/spdk/util.h 00:03:11.411 TEST_HEADER include/spdk/uuid.h 00:03:11.411 TEST_HEADER include/spdk/version.h 00:03:11.411 LINK hello_sock 00:03:11.411 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:11.411 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:11.411 TEST_HEADER include/spdk/vhost.h 00:03:11.411 TEST_HEADER include/spdk/vmd.h 00:03:11.411 TEST_HEADER include/spdk/xor.h 00:03:11.411 TEST_HEADER include/spdk/zipf.h 00:03:11.411 CXX test/cpp_headers/accel.o 00:03:11.411 CC test/env/mem_callbacks/mem_callbacks.o 00:03:11.670 CC test/event/event_perf/event_perf.o 00:03:11.671 CC test/nvme/aer/aer.o 00:03:11.671 CXX test/cpp_headers/accel_module.o 00:03:11.671 CC test/nvme/reset/reset.o 00:03:11.671 CC test/nvme/sgl/sgl.o 00:03:11.671 LINK vhost_fuzz 00:03:11.671 LINK event_perf 00:03:11.671 CC examples/vmd/lsvmd/lsvmd.o 00:03:11.671 CXX test/cpp_headers/assert.o 00:03:11.929 LINK spdk_bdev 00:03:11.929 CC test/rpc_client/rpc_client_test.o 00:03:11.929 LINK lsvmd 00:03:11.929 LINK aer 00:03:11.929 LINK reset 00:03:11.929 LINK sgl 00:03:11.929 CXX 
test/cpp_headers/barrier.o 00:03:11.929 CC test/event/reactor/reactor.o 00:03:12.188 LINK rpc_client_test 00:03:12.188 LINK mem_callbacks 00:03:12.188 CXX test/cpp_headers/base64.o 00:03:12.188 CC test/event/reactor_perf/reactor_perf.o 00:03:12.188 LINK reactor 00:03:12.188 CC test/nvme/e2edp/nvme_dp.o 00:03:12.188 CC test/event/app_repeat/app_repeat.o 00:03:12.188 CC examples/vmd/led/led.o 00:03:12.188 CC test/event/scheduler/scheduler.o 00:03:12.188 LINK reactor_perf 00:03:12.188 CXX test/cpp_headers/bdev.o 00:03:12.188 CC test/env/vtophys/vtophys.o 00:03:12.188 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:12.447 LINK led 00:03:12.447 LINK app_repeat 00:03:12.447 CXX test/cpp_headers/bdev_module.o 00:03:12.448 CC examples/idxd/perf/perf.o 00:03:12.448 LINK scheduler 00:03:12.448 LINK nvme_dp 00:03:12.448 LINK vtophys 00:03:12.448 CC test/env/memory/memory_ut.o 00:03:12.448 LINK env_dpdk_post_init 00:03:12.706 CC test/env/pci/pci_ut.o 00:03:12.706 CXX test/cpp_headers/bdev_zone.o 00:03:12.706 CXX test/cpp_headers/bit_array.o 00:03:12.706 CC test/nvme/overhead/overhead.o 00:03:12.706 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:12.707 CC test/nvme/err_injection/err_injection.o 00:03:12.707 LINK idxd_perf 00:03:12.707 CC examples/accel/perf/accel_perf.o 00:03:12.707 CXX test/cpp_headers/bit_pool.o 00:03:12.991 CC test/nvme/startup/startup.o 00:03:12.991 LINK err_injection 00:03:12.991 CXX test/cpp_headers/blob_bdev.o 00:03:12.991 LINK iscsi_fuzz 00:03:12.991 LINK hello_fsdev 00:03:12.991 LINK overhead 00:03:12.991 CC test/nvme/reserve/reserve.o 00:03:12.991 LINK pci_ut 00:03:12.991 LINK startup 00:03:12.991 CXX test/cpp_headers/blobfs_bdev.o 00:03:13.272 LINK reserve 00:03:13.272 CXX test/cpp_headers/blobfs.o 00:03:13.272 CC test/nvme/simple_copy/simple_copy.o 00:03:13.272 CC test/app/histogram_perf/histogram_perf.o 00:03:13.272 CC examples/blob/hello_world/hello_blob.o 00:03:13.272 CC examples/nvme/hello_world/hello_world.o 00:03:13.272 CC 
test/nvme/connect_stress/connect_stress.o 00:03:13.272 CC examples/blob/cli/blobcli.o 00:03:13.272 LINK accel_perf 00:03:13.272 CXX test/cpp_headers/blob.o 00:03:13.794 LINK histogram_perf 00:03:13.794 CC test/app/jsoncat/jsoncat.o 00:03:13.794 LINK simple_copy 00:03:13.794 LINK connect_stress 00:03:13.794 LINK hello_blob 00:03:13.794 LINK hello_world 00:03:13.794 CXX test/cpp_headers/conf.o 00:03:13.794 LINK jsoncat 00:03:13.794 CC test/nvme/boot_partition/boot_partition.o 00:03:13.794 CC test/nvme/compliance/nvme_compliance.o 00:03:13.794 LINK memory_ut 00:03:13.794 CXX test/cpp_headers/config.o 00:03:13.794 CXX test/cpp_headers/cpuset.o 00:03:13.794 LINK boot_partition 00:03:13.794 CC examples/nvme/reconnect/reconnect.o 00:03:13.794 CC test/nvme/fused_ordering/fused_ordering.o 00:03:13.794 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:13.794 CC test/nvme/fdp/fdp.o 00:03:13.794 CC test/app/stub/stub.o 00:03:13.794 LINK blobcli 00:03:13.794 CXX test/cpp_headers/crc16.o 00:03:14.052 LINK fused_ordering 00:03:14.052 CC test/nvme/cuse/cuse.o 00:03:14.052 LINK stub 00:03:14.052 LINK nvme_compliance 00:03:14.052 LINK doorbell_aers 00:03:14.052 CC test/accel/dif/dif.o 00:03:14.052 CXX test/cpp_headers/crc32.o 00:03:14.052 LINK reconnect 00:03:14.052 LINK fdp 00:03:14.052 CXX test/cpp_headers/crc64.o 00:03:14.311 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.311 CC examples/nvme/arbitration/arbitration.o 00:03:14.311 CC examples/nvme/hotplug/hotplug.o 00:03:14.311 CC examples/bdev/hello_world/hello_bdev.o 00:03:14.311 CXX test/cpp_headers/dif.o 00:03:14.311 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:14.311 CC examples/bdev/bdevperf/bdevperf.o 00:03:14.311 CC examples/nvme/abort/abort.o 00:03:14.571 CXX test/cpp_headers/dma.o 00:03:14.571 LINK hotplug 00:03:14.571 LINK hello_bdev 00:03:14.571 LINK cmb_copy 00:03:14.571 LINK arbitration 00:03:14.571 CXX test/cpp_headers/endian.o 00:03:14.571 CXX test/cpp_headers/env_dpdk.o 00:03:14.830 CXX test/cpp_headers/env.o 
00:03:14.830 LINK abort 00:03:14.830 LINK nvme_manage 00:03:14.830 CXX test/cpp_headers/event.o 00:03:14.830 CXX test/cpp_headers/fd_group.o 00:03:14.830 LINK dif 00:03:14.830 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:14.830 CXX test/cpp_headers/fd.o 00:03:14.830 CXX test/cpp_headers/file.o 00:03:14.830 CXX test/cpp_headers/fsdev.o 00:03:14.830 CXX test/cpp_headers/fsdev_module.o 00:03:15.089 CC test/blobfs/mkfs/mkfs.o 00:03:15.089 CXX test/cpp_headers/ftl.o 00:03:15.089 LINK pmr_persistence 00:03:15.089 CXX test/cpp_headers/fuse_dispatcher.o 00:03:15.089 CXX test/cpp_headers/gpt_spec.o 00:03:15.089 CC test/lvol/esnap/esnap.o 00:03:15.089 CXX test/cpp_headers/hexlify.o 00:03:15.089 CXX test/cpp_headers/histogram_data.o 00:03:15.089 LINK mkfs 00:03:15.089 CXX test/cpp_headers/idxd.o 00:03:15.348 CXX test/cpp_headers/idxd_spec.o 00:03:15.348 LINK bdevperf 00:03:15.348 CC test/bdev/bdevio/bdevio.o 00:03:15.348 CXX test/cpp_headers/init.o 00:03:15.348 CXX test/cpp_headers/ioat.o 00:03:15.348 CXX test/cpp_headers/ioat_spec.o 00:03:15.348 CXX test/cpp_headers/iscsi_spec.o 00:03:15.348 CXX test/cpp_headers/json.o 00:03:15.348 CXX test/cpp_headers/jsonrpc.o 00:03:15.348 LINK cuse 00:03:15.348 CXX test/cpp_headers/keyring.o 00:03:15.348 CXX test/cpp_headers/keyring_module.o 00:03:15.607 CXX test/cpp_headers/likely.o 00:03:15.607 CXX test/cpp_headers/log.o 00:03:15.607 CXX test/cpp_headers/lvol.o 00:03:15.607 CXX test/cpp_headers/md5.o 00:03:15.607 CXX test/cpp_headers/memory.o 00:03:15.607 CXX test/cpp_headers/mmio.o 00:03:15.607 CXX test/cpp_headers/nbd.o 00:03:15.607 CXX test/cpp_headers/net.o 00:03:15.607 LINK bdevio 00:03:15.607 CXX test/cpp_headers/notify.o 00:03:15.607 CXX test/cpp_headers/nvme.o 00:03:15.607 CXX test/cpp_headers/nvme_intel.o 00:03:15.865 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.865 CC examples/nvmf/nvmf/nvmf.o 00:03:15.865 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:15.865 CXX test/cpp_headers/nvme_spec.o 00:03:15.865 CXX 
test/cpp_headers/nvme_zns.o 00:03:15.865 CXX test/cpp_headers/nvmf_cmd.o 00:03:15.865 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:15.865 CXX test/cpp_headers/nvmf.o 00:03:15.865 CXX test/cpp_headers/nvmf_spec.o 00:03:15.865 CXX test/cpp_headers/nvmf_transport.o 00:03:15.865 CXX test/cpp_headers/opal.o 00:03:16.124 CXX test/cpp_headers/opal_spec.o 00:03:16.124 CXX test/cpp_headers/pci_ids.o 00:03:16.124 CXX test/cpp_headers/pipe.o 00:03:16.124 CXX test/cpp_headers/queue.o 00:03:16.124 CXX test/cpp_headers/reduce.o 00:03:16.124 CXX test/cpp_headers/rpc.o 00:03:16.124 CXX test/cpp_headers/scheduler.o 00:03:16.124 LINK nvmf 00:03:16.124 CXX test/cpp_headers/scsi.o 00:03:16.124 CXX test/cpp_headers/scsi_spec.o 00:03:16.124 CXX test/cpp_headers/sock.o 00:03:16.124 CXX test/cpp_headers/stdinc.o 00:03:16.124 CXX test/cpp_headers/string.o 00:03:16.382 CXX test/cpp_headers/thread.o 00:03:16.382 CXX test/cpp_headers/trace.o 00:03:16.382 CXX test/cpp_headers/trace_parser.o 00:03:16.382 CXX test/cpp_headers/tree.o 00:03:16.382 CXX test/cpp_headers/ublk.o 00:03:16.382 CXX test/cpp_headers/util.o 00:03:16.382 CXX test/cpp_headers/uuid.o 00:03:16.382 CXX test/cpp_headers/version.o 00:03:16.382 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.382 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.382 CXX test/cpp_headers/vhost.o 00:03:16.382 CXX test/cpp_headers/vmd.o 00:03:16.382 CXX test/cpp_headers/xor.o 00:03:16.382 CXX test/cpp_headers/zipf.o 00:03:21.690 LINK esnap 00:03:21.950 00:03:21.950 real 1m28.292s 00:03:21.950 user 7m31.302s 00:03:21.950 sys 1m41.285s 00:03:21.950 14:35:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:21.950 14:35:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.950 ************************************ 00:03:21.950 END TEST make 00:03:21.950 ************************************ 00:03:21.950 14:36:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.950 14:36:00 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:03:21.950 14:36:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.950 14:36:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.950 14:36:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.950 14:36:00 -- pm/common@44 -- $ pid=5466 00:03:21.950 14:36:00 -- pm/common@50 -- $ kill -TERM 5466 00:03:21.950 14:36:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.950 14:36:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.950 14:36:00 -- pm/common@44 -- $ pid=5468 00:03:21.950 14:36:00 -- pm/common@50 -- $ kill -TERM 5468 00:03:21.950 14:36:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:21.950 14:36:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:22.210 14:36:00 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:22.210 14:36:00 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:22.210 14:36:00 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:22.210 14:36:00 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:22.210 14:36:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.210 14:36:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.210 14:36:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.210 14:36:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.210 14:36:00 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.210 14:36:00 -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.210 14:36:00 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.210 14:36:00 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.210 14:36:00 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.210 14:36:00 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.210 14:36:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.210 
14:36:00 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.210 14:36:00 -- scripts/common.sh@345 -- # : 1 00:03:22.210 14:36:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.210 14:36:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:22.210 14:36:00 -- scripts/common.sh@365 -- # decimal 1 00:03:22.210 14:36:00 -- scripts/common.sh@353 -- # local d=1 00:03:22.210 14:36:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.210 14:36:00 -- scripts/common.sh@355 -- # echo 1 00:03:22.210 14:36:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.210 14:36:00 -- scripts/common.sh@366 -- # decimal 2 00:03:22.210 14:36:00 -- scripts/common.sh@353 -- # local d=2 00:03:22.210 14:36:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.210 14:36:00 -- scripts/common.sh@355 -- # echo 2 00:03:22.210 14:36:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.210 14:36:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.210 14:36:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.210 14:36:00 -- scripts/common.sh@368 -- # return 0 00:03:22.210 14:36:00 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.210 14:36:00 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:22.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.210 --rc genhtml_branch_coverage=1 00:03:22.210 --rc genhtml_function_coverage=1 00:03:22.210 --rc genhtml_legend=1 00:03:22.210 --rc geninfo_all_blocks=1 00:03:22.210 --rc geninfo_unexecuted_blocks=1 00:03:22.210 00:03:22.210 ' 00:03:22.210 14:36:00 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:22.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.210 --rc genhtml_branch_coverage=1 00:03:22.210 --rc genhtml_function_coverage=1 00:03:22.210 --rc genhtml_legend=1 00:03:22.210 --rc geninfo_all_blocks=1 00:03:22.210 --rc geninfo_unexecuted_blocks=1 
00:03:22.210 00:03:22.210 ' 00:03:22.210 14:36:00 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:22.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.210 --rc genhtml_branch_coverage=1 00:03:22.210 --rc genhtml_function_coverage=1 00:03:22.210 --rc genhtml_legend=1 00:03:22.210 --rc geninfo_all_blocks=1 00:03:22.210 --rc geninfo_unexecuted_blocks=1 00:03:22.210 00:03:22.210 ' 00:03:22.210 14:36:00 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:22.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.210 --rc genhtml_branch_coverage=1 00:03:22.210 --rc genhtml_function_coverage=1 00:03:22.210 --rc genhtml_legend=1 00:03:22.210 --rc geninfo_all_blocks=1 00:03:22.210 --rc geninfo_unexecuted_blocks=1 00:03:22.210 00:03:22.210 ' 00:03:22.210 14:36:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:22.210 14:36:00 -- nvmf/common.sh@7 -- # uname -s 00:03:22.210 14:36:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.210 14:36:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.210 14:36:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.210 14:36:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.210 14:36:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.210 14:36:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.210 14:36:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.210 14:36:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.210 14:36:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.210 14:36:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.210 14:36:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77e609b4-a18d-4719-b7e6-68133c864077 00:03:22.210 14:36:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=77e609b4-a18d-4719-b7e6-68133c864077 00:03:22.210 14:36:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:03:22.210 14:36:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.210 14:36:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:22.210 14:36:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:22.210 14:36:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.210 14:36:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.210 14:36:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.210 14:36:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.210 14:36:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.210 14:36:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.210 14:36:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.211 14:36:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.211 14:36:00 -- paths/export.sh@5 -- # export PATH 00:03:22.211 14:36:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.211 14:36:00 -- nvmf/common.sh@51 -- # : 0 00:03:22.211 14:36:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.211 14:36:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:22.211 14:36:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:22.211 14:36:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.211 14:36:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.211 14:36:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.211 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.211 14:36:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.211 14:36:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.211 14:36:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.211 14:36:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.211 14:36:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.211 14:36:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.211 14:36:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.211 14:36:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.211 14:36:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.211 14:36:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.211 14:36:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.470 14:36:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.470 14:36:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.470 14:36:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor 
--property 00:03:22.470 14:36:00 -- spdk/autotest.sh@48 -- # udevadm_pid=55672 00:03:22.470 14:36:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.470 14:36:00 -- pm/common@17 -- # local monitor 00:03:22.470 14:36:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.470 14:36:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.471 14:36:00 -- pm/common@21 -- # date +%s 00:03:22.471 14:36:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733754960 00:03:22.471 14:36:00 -- pm/common@25 -- # sleep 1 00:03:22.471 14:36:00 -- pm/common@21 -- # date +%s 00:03:22.471 14:36:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733754960 00:03:22.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733754960_collect-cpu-load.pm.log 00:03:22.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733754960_collect-vmstat.pm.log 00:03:23.407 14:36:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.407 14:36:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.407 14:36:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.407 14:36:01 -- common/autotest_common.sh@10 -- # set +x 00:03:23.407 14:36:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.407 14:36:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:23.407 14:36:01 -- common/autotest_common.sh@10 -- # set +x 00:03:23.407 14:36:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:23.407 14:36:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:23.407 14:36:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:23.407 14:36:01 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:23.407 14:36:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:23.407 14:36:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.407 14:36:01 -- common/autotest_common.sh@1457 -- # uname 00:03:23.407 14:36:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:23.407 14:36:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.407 14:36:01 -- common/autotest_common.sh@1477 -- # uname 00:03:23.407 14:36:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:23.407 14:36:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.407 14:36:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.667 lcov: LCOV version 1.15 00:03:23.667 14:36:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:41.765 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.765 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:56.663 14:36:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:56.663 14:36:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.663 14:36:34 -- common/autotest_common.sh@10 -- # set +x 00:03:56.663 14:36:34 -- spdk/autotest.sh@78 -- # rm -f 00:03:56.663 14:36:34 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.599 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.599 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:57.599 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:57.599 14:36:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:57.599 14:36:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:57.599 14:36:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:57.599 14:36:35 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:57.599 14:36:35 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:57.599 14:36:35 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:57.599 14:36:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:57.599 14:36:35 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:57.599 14:36:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:57.599 14:36:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:57.599 14:36:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:57.599 14:36:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:57.599 14:36:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.599 14:36:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:57.599 14:36:35 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:57.599 14:36:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:57.599 14:36:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:57.599 14:36:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:57.599 14:36:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:57.599 14:36:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.599 14:36:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:57.599 14:36:35 -- 
common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:57.599 14:36:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:57.599 14:36:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:57.599 14:36:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.599 14:36:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:57.599 14:36:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:57.599 14:36:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:57.599 14:36:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:57.599 14:36:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.599 14:36:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:57.599 14:36:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.599 14:36:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.599 14:36:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:57.599 14:36:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:57.599 14:36:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:57.599 No valid GPT data, bailing 00:03:57.599 14:36:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:57.599 14:36:35 -- scripts/common.sh@394 -- # pt= 00:03:57.599 14:36:35 -- scripts/common.sh@395 -- # return 1 00:03:57.599 14:36:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:57.599 1+0 records in 00:03:57.599 1+0 records out 00:03:57.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436239 s, 240 MB/s 00:03:57.599 14:36:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.599 14:36:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.599 14:36:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:57.599 14:36:35 -- scripts/common.sh@381 -- # local 
block=/dev/nvme1n1 pt 00:03:57.599 14:36:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:57.599 No valid GPT data, bailing 00:03:57.599 14:36:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:57.858 14:36:35 -- scripts/common.sh@394 -- # pt= 00:03:57.858 14:36:35 -- scripts/common.sh@395 -- # return 1 00:03:57.858 14:36:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:57.858 1+0 records in 00:03:57.858 1+0 records out 00:03:57.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382672 s, 274 MB/s 00:03:57.858 14:36:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.858 14:36:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.858 14:36:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:57.858 14:36:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:57.858 14:36:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:57.858 No valid GPT data, bailing 00:03:57.858 14:36:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:57.858 14:36:35 -- scripts/common.sh@394 -- # pt= 00:03:57.858 14:36:35 -- scripts/common.sh@395 -- # return 1 00:03:57.858 14:36:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:57.858 1+0 records in 00:03:57.858 1+0 records out 00:03:57.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427348 s, 245 MB/s 00:03:57.858 14:36:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.858 14:36:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.858 14:36:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:57.858 14:36:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:57.858 14:36:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:57.858 No valid GPT data, bailing 00:03:57.858 14:36:35 -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:57.858 14:36:35 -- scripts/common.sh@394 -- # pt= 00:03:57.858 14:36:35 -- scripts/common.sh@395 -- # return 1 00:03:57.858 14:36:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:57.858 1+0 records in 00:03:57.858 1+0 records out 00:03:57.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562336 s, 186 MB/s 00:03:57.858 14:36:35 -- spdk/autotest.sh@105 -- # sync 00:03:57.858 14:36:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:57.858 14:36:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:57.858 14:36:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.392 14:36:38 -- spdk/autotest.sh@111 -- # uname -s 00:04:00.392 14:36:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:00.392 14:36:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:00.392 14:36:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.329 Hugepages 00:04:01.329 node hugesize free / total 00:04:01.329 node0 1048576kB 0 / 0 00:04:01.329 node0 2048kB 0 / 0 00:04:01.329 00:04:01.329 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.329 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:01.588 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:01.588 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:01.588 14:36:39 -- spdk/autotest.sh@117 -- # uname -s 00:04:01.588 14:36:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:01.588 14:36:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:01.588 14:36:39 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.525 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.525 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.525 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.525 14:36:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:03.904 14:36:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:03.904 14:36:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:03.904 14:36:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:03.904 14:36:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:03.904 14:36:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.904 14:36:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.904 14:36:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.904 14:36:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.904 14:36:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.904 14:36:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:03.904 14:36:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.904 14:36:41 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.163 Waiting for block devices as requested 00:04:04.163 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.423 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.423 14:36:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.423 14:36:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1487 -- # 
grep 0000:00:10.0/nvme/nvme 00:04:04.423 14:36:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:04.423 14:36:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.423 14:36:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.423 14:36:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1543 -- # continue 00:04:04.423 14:36:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.423 14:36:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.423 14:36:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:04.423 14:36:42 -- common/autotest_common.sh@1487 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.423 14:36:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:04.423 14:36:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.423 14:36:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:04.423 14:36:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.423 14:36:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.423 14:36:42 -- common/autotest_common.sh@1543 -- # continue 00:04:04.423 14:36:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:04.423 14:36:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.423 14:36:42 -- common/autotest_common.sh@10 -- # set +x 00:04:04.423 14:36:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:04.423 14:36:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.423 14:36:42 -- common/autotest_common.sh@10 -- # set +x 00:04:04.683 14:36:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.251 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.511 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.511 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.511 14:36:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:05.511 14:36:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.511 14:36:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.511 14:36:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:05.511 14:36:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:05.511 14:36:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.511 14:36:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:05.511 14:36:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:05.511 14:36:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:05.511 14:36:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:05.511 14:36:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:05.511 14:36:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.511 14:36:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.511 14:36:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.511 14:36:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.511 14:36:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.770 14:36:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:05.770 14:36:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.770 14:36:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:05.770 14:36:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.770 14:36:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:05.770 
14:36:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.770 14:36:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:05.770 14:36:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:05.770 14:36:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:05.770 14:36:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.770 14:36:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:05.770 14:36:43 -- common/autotest_common.sh@1572 -- # return 0 00:04:05.770 14:36:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:05.770 14:36:43 -- common/autotest_common.sh@1580 -- # return 0 00:04:05.770 14:36:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:05.770 14:36:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:05.770 14:36:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.770 14:36:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.770 14:36:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:05.770 14:36:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.770 14:36:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.770 14:36:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:05.770 14:36:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.770 14:36:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.770 14:36:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.770 14:36:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.770 ************************************ 00:04:05.770 START TEST env 00:04:05.770 ************************************ 00:04:05.770 14:36:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.770 * Looking for test storage... 
00:04:05.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:05.770 14:36:43 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.770 14:36:43 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.770 14:36:43 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:06.031 14:36:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.031 14:36:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.031 14:36:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.031 14:36:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.031 14:36:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.031 14:36:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.031 14:36:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.031 14:36:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.031 14:36:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.031 14:36:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.031 14:36:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.031 14:36:43 env -- scripts/common.sh@344 -- # case "$op" in 00:04:06.031 14:36:43 env -- scripts/common.sh@345 -- # : 1 00:04:06.031 14:36:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.031 14:36:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.031 14:36:43 env -- scripts/common.sh@365 -- # decimal 1 00:04:06.031 14:36:43 env -- scripts/common.sh@353 -- # local d=1 00:04:06.031 14:36:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.031 14:36:43 env -- scripts/common.sh@355 -- # echo 1 00:04:06.031 14:36:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.031 14:36:43 env -- scripts/common.sh@366 -- # decimal 2 00:04:06.031 14:36:43 env -- scripts/common.sh@353 -- # local d=2 00:04:06.031 14:36:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.031 14:36:43 env -- scripts/common.sh@355 -- # echo 2 00:04:06.031 14:36:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.031 14:36:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.031 14:36:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.031 14:36:43 env -- scripts/common.sh@368 -- # return 0 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.031 --rc genhtml_branch_coverage=1 00:04:06.031 --rc genhtml_function_coverage=1 00:04:06.031 --rc genhtml_legend=1 00:04:06.031 --rc geninfo_all_blocks=1 00:04:06.031 --rc geninfo_unexecuted_blocks=1 00:04:06.031 00:04:06.031 ' 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.031 --rc genhtml_branch_coverage=1 00:04:06.031 --rc genhtml_function_coverage=1 00:04:06.031 --rc genhtml_legend=1 00:04:06.031 --rc geninfo_all_blocks=1 00:04:06.031 --rc geninfo_unexecuted_blocks=1 00:04:06.031 00:04:06.031 ' 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:06.031 --rc genhtml_branch_coverage=1 00:04:06.031 --rc genhtml_function_coverage=1 00:04:06.031 --rc genhtml_legend=1 00:04:06.031 --rc geninfo_all_blocks=1 00:04:06.031 --rc geninfo_unexecuted_blocks=1 00:04:06.031 00:04:06.031 ' 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.031 --rc genhtml_branch_coverage=1 00:04:06.031 --rc genhtml_function_coverage=1 00:04:06.031 --rc genhtml_legend=1 00:04:06.031 --rc geninfo_all_blocks=1 00:04:06.031 --rc geninfo_unexecuted_blocks=1 00:04:06.031 00:04:06.031 ' 00:04:06.031 14:36:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.031 14:36:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.031 14:36:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.031 ************************************ 00:04:06.031 START TEST env_memory 00:04:06.031 ************************************ 00:04:06.031 14:36:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.031 00:04:06.031 00:04:06.031 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.031 http://cunit.sourceforge.net/ 00:04:06.031 00:04:06.031 00:04:06.031 Suite: memory 00:04:06.031 Test: alloc and free memory map ...[2024-12-09 14:36:44.003796] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.031 passed 00:04:06.031 Test: mem map translation ...[2024-12-09 14:36:44.058534] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:06.031 [2024-12-09 14:36:44.058626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:06.031 [2024-12-09 14:36:44.058715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:06.031 [2024-12-09 14:36:44.058741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:06.031 passed 00:04:06.031 Test: mem map registration ...[2024-12-09 14:36:44.139245] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:06.031 [2024-12-09 14:36:44.139326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:06.291 passed 00:04:06.291 Test: mem map adjacent registrations ...passed 00:04:06.291 00:04:06.291 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.291 suites 1 1 n/a 0 0 00:04:06.291 tests 4 4 4 0 0 00:04:06.291 asserts 152 152 152 0 n/a 00:04:06.291 00:04:06.291 Elapsed time = 0.295 seconds 00:04:06.291 00:04:06.291 real 0m0.339s 00:04:06.291 user 0m0.305s 00:04:06.291 sys 0m0.022s 00:04:06.291 14:36:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.291 14:36:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:06.291 ************************************ 00:04:06.291 END TEST env_memory 00:04:06.291 ************************************ 00:04:06.291 14:36:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.291 14:36:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.291 14:36:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.291 14:36:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.291 
************************************ 00:04:06.291 START TEST env_vtophys 00:04:06.291 ************************************ 00:04:06.291 14:36:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.291 EAL: lib.eal log level changed from notice to debug 00:04:06.291 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 1 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 2 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 3 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 4 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 5 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 6 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 7 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 8 as core 0 on socket 0 00:04:06.291 EAL: Detected lcore 9 as core 0 on socket 0 00:04:06.291 EAL: Maximum logical cores by configuration: 128 00:04:06.291 EAL: Detected CPU lcores: 10 00:04:06.291 EAL: Detected NUMA nodes: 1 00:04:06.291 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:06.291 EAL: Detected shared linkage of DPDK 00:04:06.551 EAL: No shared files mode enabled, IPC will be disabled 00:04:06.551 EAL: Selected IOVA mode 'PA' 00:04:06.551 EAL: Probing VFIO support... 00:04:06.551 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.551 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:06.551 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.551 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.551 EAL: Setting up physically contiguous memory... 
00:04:06.551 EAL: Setting maximum number of open files to 524288 00:04:06.551 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.551 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.551 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.551 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.551 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.551 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.551 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.551 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.551 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.551 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.551 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.551 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.551 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.551 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.551 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.551 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.551 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.551 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.551 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.551 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.551 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.551 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.551 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.551 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.551 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.551 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.551 EAL: Hugepages will be freed exactly as allocated. 
00:04:06.551 EAL: No shared files mode enabled, IPC is disabled 00:04:06.551 EAL: No shared files mode enabled, IPC is disabled 00:04:06.551 EAL: TSC frequency is ~2290000 KHz 00:04:06.551 EAL: Main lcore 0 is ready (tid=7f4cefb18a40;cpuset=[0]) 00:04:06.551 EAL: Trying to obtain current memory policy. 00:04:06.551 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.551 EAL: Restoring previous memory policy: 0 00:04:06.551 EAL: request: mp_malloc_sync 00:04:06.551 EAL: No shared files mode enabled, IPC is disabled 00:04:06.551 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.551 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.551 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:06.551 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.551 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:06.551 00:04:06.551 00:04:06.551 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.551 http://cunit.sourceforge.net/ 00:04:06.551 00:04:06.551 00:04:06.551 Suite: components_suite 00:04:07.120 Test: vtophys_malloc_test ...passed 00:04:07.120 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.120 EAL: Restoring previous memory policy: 4 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.120 EAL: Trying to obtain current memory policy. 
00:04:07.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.120 EAL: Restoring previous memory policy: 4 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.120 EAL: Trying to obtain current memory policy. 00:04:07.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.120 EAL: Restoring previous memory policy: 4 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.120 EAL: Trying to obtain current memory policy. 00:04:07.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.120 EAL: Restoring previous memory policy: 4 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.120 EAL: Trying to obtain current memory policy. 
00:04:07.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.120 EAL: Restoring previous memory policy: 4 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.120 EAL: request: mp_malloc_sync 00:04:07.120 EAL: No shared files mode enabled, IPC is disabled 00:04:07.120 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.379 EAL: Trying to obtain current memory policy. 00:04:07.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.379 EAL: Restoring previous memory policy: 4 00:04:07.379 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.379 EAL: request: mp_malloc_sync 00:04:07.379 EAL: No shared files mode enabled, IPC is disabled 00:04:07.379 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.379 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.379 EAL: request: mp_malloc_sync 00:04:07.379 EAL: No shared files mode enabled, IPC is disabled 00:04:07.379 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.639 EAL: Trying to obtain current memory policy. 00:04:07.639 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.639 EAL: Restoring previous memory policy: 4 00:04:07.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.639 EAL: request: mp_malloc_sync 00:04:07.639 EAL: No shared files mode enabled, IPC is disabled 00:04:07.639 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.898 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.898 EAL: request: mp_malloc_sync 00:04:07.898 EAL: No shared files mode enabled, IPC is disabled 00:04:07.898 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.158 EAL: Trying to obtain current memory policy. 
00:04:08.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.158 EAL: Restoring previous memory policy: 4 00:04:08.158 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.158 EAL: request: mp_malloc_sync 00:04:08.158 EAL: No shared files mode enabled, IPC is disabled 00:04:08.158 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.726 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.726 EAL: request: mp_malloc_sync 00:04:08.726 EAL: No shared files mode enabled, IPC is disabled 00:04:08.726 EAL: Heap on socket 0 was shrunk by 258MB 00:04:09.308 EAL: Trying to obtain current memory policy. 00:04:09.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.308 EAL: Restoring previous memory policy: 4 00:04:09.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.308 EAL: request: mp_malloc_sync 00:04:09.308 EAL: No shared files mode enabled, IPC is disabled 00:04:09.308 EAL: Heap on socket 0 was expanded by 514MB 00:04:10.693 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.693 EAL: request: mp_malloc_sync 00:04:10.693 EAL: No shared files mode enabled, IPC is disabled 00:04:10.693 EAL: Heap on socket 0 was shrunk by 514MB 00:04:11.633 EAL: Trying to obtain current memory policy. 
00:04:11.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.633 EAL: Restoring previous memory policy: 4 00:04:11.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.633 EAL: request: mp_malloc_sync 00:04:11.633 EAL: No shared files mode enabled, IPC is disabled 00:04:11.633 EAL: Heap on socket 0 was expanded by 1026MB 00:04:13.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.799 EAL: request: mp_malloc_sync 00:04:13.799 EAL: No shared files mode enabled, IPC is disabled 00:04:13.799 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:15.705 passed 00:04:15.705 00:04:15.705 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.705 suites 1 1 n/a 0 0 00:04:15.705 tests 2 2 2 0 0 00:04:15.705 asserts 5705 5705 5705 0 n/a 00:04:15.705 00:04:15.705 Elapsed time = 9.050 seconds 00:04:15.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.705 EAL: request: mp_malloc_sync 00:04:15.705 EAL: No shared files mode enabled, IPC is disabled 00:04:15.705 EAL: Heap on socket 0 was shrunk by 2MB 00:04:15.705 EAL: No shared files mode enabled, IPC is disabled 00:04:15.705 EAL: No shared files mode enabled, IPC is disabled 00:04:15.705 EAL: No shared files mode enabled, IPC is disabled 00:04:15.705 00:04:15.705 real 0m9.387s 00:04:15.705 user 0m8.358s 00:04:15.705 sys 0m0.858s 00:04:15.705 14:36:53 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.705 14:36:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:15.705 ************************************ 00:04:15.705 END TEST env_vtophys 00:04:15.705 ************************************ 00:04:15.705 14:36:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.705 14:36:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.705 14:36:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.705 14:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.705 
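The env_vtophys trace above is dominated by paired EAL messages: each allocation in the roughly doubling ladder (6MB, 10MB, 18MB, ... 1026MB) logs "Heap on socket 0 was expanded by N MB", and the matching free logs a "shrunk by N MB" line. A quick way to sanity-check such a trace is to scrape those events and confirm they balance. The sketch below is an illustrative log-parsing helper, not an SPDK or DPDK tool; the function name `heap_delta` is invented for this example.

```python
import re

# Matches the EAL heap messages seen in the trace above, e.g.
# "EAL: Heap on socket 0 was expanded by 6MB"
HEAP_EVENT = re.compile(r"Heap on socket (\d+) was (expanded|shrunk) by (\d+)MB")

def heap_delta(lines):
    """Return the net heap growth in MB implied by a sequence of EAL log lines."""
    delta = 0
    for line in lines:
        m = HEAP_EVENT.search(line)
        if m:
            size = int(m.group(3))
            delta += size if m.group(2) == "expanded" else -size
    return delta

sample = [
    "EAL: Heap on socket 0 was expanded by 6MB",
    "EAL: Heap on socket 0 was shrunk by 6MB",
    "EAL: Heap on socket 0 was expanded by 1026MB",
    "EAL: Heap on socket 0 was shrunk by 1026MB",
]
```

A non-zero result over a full expand/free cycle would indicate a missed shrink event, which is the kind of leak this test is designed to surface.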
************************************ 00:04:15.705 START TEST env_pci 00:04:15.705 ************************************ 00:04:15.705 14:36:53 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.705 00:04:15.705 00:04:15.705 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.705 http://cunit.sourceforge.net/ 00:04:15.705 00:04:15.705 00:04:15.705 Suite: pci 00:04:15.705 Test: pci_hook ...[2024-12-09 14:36:53.823162] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58027 has claimed it 00:04:15.965 passed 00:04:15.965 00:04:15.965 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.965 suites 1 1 n/a 0 0 00:04:15.965 tests 1 1 1 0 0 00:04:15.965 asserts 25 25 25 0 n/a 00:04:15.965 00:04:15.965 Elapsed time = 0.007 seconds 00:04:15.965 EAL: Cannot find device (10000:00:01.0) 00:04:15.965 EAL: Failed to attach device on primary process 00:04:15.965 00:04:15.965 real 0m0.107s 00:04:15.965 user 0m0.045s 00:04:15.965 sys 0m0.061s 00:04:15.965 14:36:53 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.965 14:36:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:15.965 ************************************ 00:04:15.965 END TEST env_pci 00:04:15.965 ************************************ 00:04:15.965 14:36:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:15.965 14:36:53 env -- env/env.sh@15 -- # uname 00:04:15.965 14:36:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:15.965 14:36:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:15.965 14:36:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.965 14:36:53 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:15.965 14:36:53 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.965 14:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.965 ************************************ 00:04:15.965 START TEST env_dpdk_post_init 00:04:15.965 ************************************ 00:04:15.965 14:36:53 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.965 EAL: Detected CPU lcores: 10 00:04:15.965 EAL: Detected NUMA nodes: 1 00:04:15.965 EAL: Detected shared linkage of DPDK 00:04:15.965 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.965 EAL: Selected IOVA mode 'PA' 00:04:16.225 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.225 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:16.225 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:16.225 Starting DPDK initialization... 00:04:16.225 Starting SPDK post initialization... 00:04:16.225 SPDK NVMe probe 00:04:16.225 Attaching to 0000:00:10.0 00:04:16.225 Attaching to 0000:00:11.0 00:04:16.225 Attached to 0000:00:10.0 00:04:16.225 Attached to 0000:00:11.0 00:04:16.225 Cleaning up... 
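The env_dpdk_post_init output above logs an "Attaching to <BDF>" line for each PCI device the probe touches, followed by a matching "Attached to <BDF>" line once the spdk_nvme driver claims it. A minimal consistency check over such output might look like the following; this is an illustrative sketch (the helper `unattached_devices` is invented here), not part of the test suite.

```python
def unattached_devices(lines):
    """Return the set of PCI addresses that started attaching but never attached."""
    attaching, attached = set(), set()
    for line in lines:
        if line.startswith("Attaching to "):
            attaching.add(line.split()[-1])
        elif line.startswith("Attached to "):
            attached.add(line.split()[-1])
    return attaching - attached

log = [
    "Attaching to 0000:00:10.0",
    "Attaching to 0000:00:11.0",
    "Attached to 0000:00:10.0",
    "Attached to 0000:00:11.0",
]
```

An empty result means every probe completed, matching the clean "Cleaning up..." exit in the trace above.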
00:04:16.225 00:04:16.225 real 0m0.296s 00:04:16.225 user 0m0.109s 00:04:16.225 sys 0m0.088s 00:04:16.225 14:36:54 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.225 14:36:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.225 ************************************ 00:04:16.225 END TEST env_dpdk_post_init 00:04:16.225 ************************************ 00:04:16.225 14:36:54 env -- env/env.sh@26 -- # uname 00:04:16.225 14:36:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:16.225 14:36:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.225 14:36:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.225 14:36:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.225 14:36:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.225 ************************************ 00:04:16.225 START TEST env_mem_callbacks 00:04:16.225 ************************************ 00:04:16.225 14:36:54 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.485 EAL: Detected CPU lcores: 10 00:04:16.485 EAL: Detected NUMA nodes: 1 00:04:16.485 EAL: Detected shared linkage of DPDK 00:04:16.485 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.485 EAL: Selected IOVA mode 'PA' 00:04:16.485 00:04:16.485 00:04:16.485 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.485 http://cunit.sourceforge.net/ 00:04:16.485 00:04:16.485 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.485 00:04:16.485 Suite: memory 00:04:16.485 Test: test ... 
00:04:16.485 register 0x200000200000 2097152 00:04:16.485 malloc 3145728 00:04:16.485 register 0x200000400000 4194304 00:04:16.485 buf 0x2000004fffc0 len 3145728 PASSED 00:04:16.485 malloc 64 00:04:16.485 buf 0x2000004ffec0 len 64 PASSED 00:04:16.485 malloc 4194304 00:04:16.485 register 0x200000800000 6291456 00:04:16.485 buf 0x2000009fffc0 len 4194304 PASSED 00:04:16.485 free 0x2000004fffc0 3145728 00:04:16.485 free 0x2000004ffec0 64 00:04:16.485 unregister 0x200000400000 4194304 PASSED 00:04:16.485 free 0x2000009fffc0 4194304 00:04:16.485 unregister 0x200000800000 6291456 PASSED 00:04:16.485 malloc 8388608 00:04:16.485 register 0x200000400000 10485760 00:04:16.485 buf 0x2000005fffc0 len 8388608 PASSED 00:04:16.485 free 0x2000005fffc0 8388608 00:04:16.485 unregister 0x200000400000 10485760 PASSED 00:04:16.485 passed 00:04:16.485 00:04:16.485 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.485 suites 1 1 n/a 0 0 00:04:16.485 tests 1 1 1 0 0 00:04:16.485 asserts 15 15 15 0 n/a 00:04:16.485 00:04:16.485 Elapsed time = 0.089 seconds 00:04:16.744 00:04:16.744 real 0m0.291s 00:04:16.744 user 0m0.108s 00:04:16.744 sys 0m0.080s 00:04:16.744 14:36:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.744 14:36:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:16.744 ************************************ 00:04:16.744 END TEST env_mem_callbacks 00:04:16.744 ************************************ 00:04:16.744 00:04:16.744 real 0m10.973s 00:04:16.744 user 0m9.151s 00:04:16.744 sys 0m1.456s 00:04:16.744 14:36:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.744 14:36:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.744 ************************************ 00:04:16.744 END TEST env 00:04:16.744 ************************************ 00:04:16.744 14:36:54 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:16.744 14:36:54 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.744 14:36:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.744 14:36:54 -- common/autotest_common.sh@10 -- # set +x 00:04:16.744 ************************************ 00:04:16.744 START TEST rpc 00:04:16.744 ************************************ 00:04:16.744 14:36:54 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:16.744 * Looking for test storage... 00:04:16.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.744 14:36:54 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.744 14:36:54 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.744 14:36:54 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.002 14:36:54 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.002 14:36:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.002 14:36:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.002 14:36:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.002 14:36:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.002 14:36:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.002 14:36:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.002 14:36:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.002 14:36:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.002 14:36:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.002 14:36:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.002 14:36:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.003 14:36:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.003 14:36:54 rpc -- scripts/common.sh@345 -- # : 1 00:04:17.003 14:36:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.003 14:36:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.003 14:36:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.003 14:36:54 rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.003 14:36:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.003 14:36:54 rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.003 14:36:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.003 14:36:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.003 14:36:54 rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.003 14:36:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.003 14:36:54 rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.003 14:36:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.003 14:36:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.003 14:36:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.003 14:36:54 rpc -- scripts/common.sh@368 -- # return 0 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.003 --rc genhtml_branch_coverage=1 00:04:17.003 --rc genhtml_function_coverage=1 00:04:17.003 --rc genhtml_legend=1 00:04:17.003 --rc geninfo_all_blocks=1 00:04:17.003 --rc geninfo_unexecuted_blocks=1 00:04:17.003 00:04:17.003 ' 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.003 --rc genhtml_branch_coverage=1 00:04:17.003 --rc genhtml_function_coverage=1 00:04:17.003 --rc genhtml_legend=1 00:04:17.003 --rc geninfo_all_blocks=1 00:04:17.003 --rc geninfo_unexecuted_blocks=1 00:04:17.003 00:04:17.003 ' 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:17.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:17.003 --rc genhtml_branch_coverage=1 00:04:17.003 --rc genhtml_function_coverage=1 00:04:17.003 --rc genhtml_legend=1 00:04:17.003 --rc geninfo_all_blocks=1 00:04:17.003 --rc geninfo_unexecuted_blocks=1 00:04:17.003 00:04:17.003 ' 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.003 --rc genhtml_branch_coverage=1 00:04:17.003 --rc genhtml_function_coverage=1 00:04:17.003 --rc genhtml_legend=1 00:04:17.003 --rc geninfo_all_blocks=1 00:04:17.003 --rc geninfo_unexecuted_blocks=1 00:04:17.003 00:04:17.003 ' 00:04:17.003 14:36:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58160 00:04:17.003 14:36:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:17.003 14:36:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.003 14:36:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58160 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 58160 ']' 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.003 14:36:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.003 [2024-12-09 14:36:55.078446] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:04:17.003 [2024-12-09 14:36:55.078607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58160 ] 00:04:17.262 [2024-12-09 14:36:55.259855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.522 [2024-12-09 14:36:55.391716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:17.522 [2024-12-09 14:36:55.391787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58160' to capture a snapshot of events at runtime. 00:04:17.522 [2024-12-09 14:36:55.391797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:17.522 [2024-12-09 14:36:55.391807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:17.522 [2024-12-09 14:36:55.391833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58160 for offline analysis/debug. 
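The harness above blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." before issuing any RPC. The core of such a wait is a poll-until-connect loop; a minimal Python equivalent is sketched below. This is a hypothetical helper written for illustration, assuming a plain `AF_UNIX` stream socket, and is not SPDK's actual `waitforlisten` implementation.

```python
import os
import socket
import tempfile
import time

def wait_for_listen(path, timeout=5.0, interval=0.05):
    """Poll until a process accepts connections on the Unix socket at `path`,
    or give up after `timeout` seconds. Returns True on success."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True  # something is listening
            except OSError:
                time.sleep(interval)  # not up yet (or wrong path); retry
    return False
```

Retrying on `OSError` covers both "socket file does not exist yet" and "file exists but nothing is accepting", which is why a loop like this works across the race between process launch and RPC readiness.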
00:04:17.522 [2024-12-09 14:36:55.393281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.463 14:36:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.463 14:36:56 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:18.463 14:36:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.463 14:36:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.463 14:36:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:18.463 14:36:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:18.463 14:36:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.463 14:36:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.463 14:36:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.463 ************************************ 00:04:18.463 START TEST rpc_integrity 00:04:18.463 ************************************ 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.463 14:36:56 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.463 { 00:04:18.463 "name": "Malloc0", 00:04:18.463 "aliases": [ 00:04:18.463 "0758435c-ff75-420f-9ddd-2474d4879db8" 00:04:18.463 ], 00:04:18.463 "product_name": "Malloc disk", 00:04:18.463 "block_size": 512, 00:04:18.463 "num_blocks": 16384, 00:04:18.463 "uuid": "0758435c-ff75-420f-9ddd-2474d4879db8", 00:04:18.463 "assigned_rate_limits": { 00:04:18.463 "rw_ios_per_sec": 0, 00:04:18.463 "rw_mbytes_per_sec": 0, 00:04:18.463 "r_mbytes_per_sec": 0, 00:04:18.463 "w_mbytes_per_sec": 0 00:04:18.463 }, 00:04:18.463 "claimed": false, 00:04:18.463 "zoned": false, 00:04:18.463 "supported_io_types": { 00:04:18.463 "read": true, 00:04:18.463 "write": true, 00:04:18.463 "unmap": true, 00:04:18.463 "flush": true, 00:04:18.463 "reset": true, 00:04:18.463 "nvme_admin": false, 00:04:18.463 "nvme_io": false, 00:04:18.463 "nvme_io_md": false, 00:04:18.463 "write_zeroes": true, 00:04:18.463 "zcopy": true, 00:04:18.463 "get_zone_info": false, 00:04:18.463 "zone_management": false, 00:04:18.463 "zone_append": false, 00:04:18.463 "compare": false, 00:04:18.463 "compare_and_write": false, 00:04:18.463 "abort": true, 00:04:18.463 "seek_hole": false, 
00:04:18.463 "seek_data": false, 00:04:18.463 "copy": true, 00:04:18.463 "nvme_iov_md": false 00:04:18.463 }, 00:04:18.463 "memory_domains": [ 00:04:18.463 { 00:04:18.463 "dma_device_id": "system", 00:04:18.463 "dma_device_type": 1 00:04:18.463 }, 00:04:18.463 { 00:04:18.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.463 "dma_device_type": 2 00:04:18.463 } 00:04:18.463 ], 00:04:18.463 "driver_specific": {} 00:04:18.463 } 00:04:18.463 ]' 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.463 [2024-12-09 14:36:56.573443] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:18.463 [2024-12-09 14:36:56.573538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.463 [2024-12-09 14:36:56.573566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:18.463 [2024-12-09 14:36:56.573598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.463 [2024-12-09 14:36:56.576410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.463 [2024-12-09 14:36:56.576472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.463 Passthru0 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.463 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.463 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:18.723 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.723 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.723 { 00:04:18.723 "name": "Malloc0", 00:04:18.723 "aliases": [ 00:04:18.723 "0758435c-ff75-420f-9ddd-2474d4879db8" 00:04:18.723 ], 00:04:18.723 "product_name": "Malloc disk", 00:04:18.723 "block_size": 512, 00:04:18.723 "num_blocks": 16384, 00:04:18.723 "uuid": "0758435c-ff75-420f-9ddd-2474d4879db8", 00:04:18.723 "assigned_rate_limits": { 00:04:18.723 "rw_ios_per_sec": 0, 00:04:18.723 "rw_mbytes_per_sec": 0, 00:04:18.723 "r_mbytes_per_sec": 0, 00:04:18.723 "w_mbytes_per_sec": 0 00:04:18.723 }, 00:04:18.723 "claimed": true, 00:04:18.723 "claim_type": "exclusive_write", 00:04:18.723 "zoned": false, 00:04:18.723 "supported_io_types": { 00:04:18.723 "read": true, 00:04:18.723 "write": true, 00:04:18.723 "unmap": true, 00:04:18.723 "flush": true, 00:04:18.723 "reset": true, 00:04:18.723 "nvme_admin": false, 00:04:18.723 "nvme_io": false, 00:04:18.723 "nvme_io_md": false, 00:04:18.723 "write_zeroes": true, 00:04:18.723 "zcopy": true, 00:04:18.723 "get_zone_info": false, 00:04:18.723 "zone_management": false, 00:04:18.723 "zone_append": false, 00:04:18.723 "compare": false, 00:04:18.723 "compare_and_write": false, 00:04:18.723 "abort": true, 00:04:18.723 "seek_hole": false, 00:04:18.723 "seek_data": false, 00:04:18.724 "copy": true, 00:04:18.724 "nvme_iov_md": false 00:04:18.724 }, 00:04:18.724 "memory_domains": [ 00:04:18.724 { 00:04:18.724 "dma_device_id": "system", 00:04:18.724 "dma_device_type": 1 00:04:18.724 }, 00:04:18.724 { 00:04:18.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.724 "dma_device_type": 2 00:04:18.724 } 00:04:18.724 ], 00:04:18.724 "driver_specific": {} 00:04:18.724 }, 00:04:18.724 { 00:04:18.724 "name": "Passthru0", 00:04:18.724 "aliases": [ 00:04:18.724 "7a54538f-f5d9-5b70-919d-4b7e274d7af5" 00:04:18.724 ], 00:04:18.724 "product_name": "passthru", 00:04:18.724 
"block_size": 512, 00:04:18.724 "num_blocks": 16384, 00:04:18.724 "uuid": "7a54538f-f5d9-5b70-919d-4b7e274d7af5", 00:04:18.724 "assigned_rate_limits": { 00:04:18.724 "rw_ios_per_sec": 0, 00:04:18.724 "rw_mbytes_per_sec": 0, 00:04:18.724 "r_mbytes_per_sec": 0, 00:04:18.724 "w_mbytes_per_sec": 0 00:04:18.724 }, 00:04:18.724 "claimed": false, 00:04:18.724 "zoned": false, 00:04:18.724 "supported_io_types": { 00:04:18.724 "read": true, 00:04:18.724 "write": true, 00:04:18.724 "unmap": true, 00:04:18.724 "flush": true, 00:04:18.724 "reset": true, 00:04:18.724 "nvme_admin": false, 00:04:18.724 "nvme_io": false, 00:04:18.724 "nvme_io_md": false, 00:04:18.724 "write_zeroes": true, 00:04:18.724 "zcopy": true, 00:04:18.724 "get_zone_info": false, 00:04:18.724 "zone_management": false, 00:04:18.724 "zone_append": false, 00:04:18.724 "compare": false, 00:04:18.724 "compare_and_write": false, 00:04:18.724 "abort": true, 00:04:18.724 "seek_hole": false, 00:04:18.724 "seek_data": false, 00:04:18.724 "copy": true, 00:04:18.724 "nvme_iov_md": false 00:04:18.724 }, 00:04:18.724 "memory_domains": [ 00:04:18.724 { 00:04:18.724 "dma_device_id": "system", 00:04:18.724 "dma_device_type": 1 00:04:18.724 }, 00:04:18.724 { 00:04:18.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.724 "dma_device_type": 2 00:04:18.724 } 00:04:18.724 ], 00:04:18.724 "driver_specific": { 00:04:18.724 "passthru": { 00:04:18.724 "name": "Passthru0", 00:04:18.724 "base_bdev_name": "Malloc0" 00:04:18.724 } 00:04:18.724 } 00:04:18.724 } 00:04:18.724 ]' 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 14:36:56 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:18.724 14:36:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.724 00:04:18.724 real 0m0.344s 00:04:18.724 user 0m0.183s 00:04:18.724 sys 0m0.050s 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.724 14:36:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 ************************************ 00:04:18.724 END TEST rpc_integrity 00:04:18.724 ************************************ 00:04:18.724 14:36:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:18.724 14:36:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.724 14:36:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.724 14:36:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 ************************************ 00:04:18.724 START TEST rpc_plugins 00:04:18.724 ************************************ 00:04:18.724 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:18.724 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc
00:04:18.724 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:18.724 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:18.724 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:18.724 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:18.724 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:18.724 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:18.724 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:18.724 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:18.724 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:18.724 {
00:04:18.724 "name": "Malloc1",
00:04:18.724 "aliases": [
00:04:18.724 "3de06ac2-a729-49c8-925f-dba81d4a1419"
00:04:18.724 ],
00:04:18.724 "product_name": "Malloc disk",
00:04:18.724 "block_size": 4096,
00:04:18.724 "num_blocks": 256,
00:04:18.724 "uuid": "3de06ac2-a729-49c8-925f-dba81d4a1419",
00:04:18.724 "assigned_rate_limits": {
00:04:18.724 "rw_ios_per_sec": 0,
00:04:18.724 "rw_mbytes_per_sec": 0,
00:04:18.724 "r_mbytes_per_sec": 0,
00:04:18.724 "w_mbytes_per_sec": 0
00:04:18.724 },
00:04:18.724 "claimed": false,
00:04:18.724 "zoned": false,
00:04:18.724 "supported_io_types": {
00:04:18.724 "read": true,
00:04:18.724 "write": true,
00:04:18.724 "unmap": true,
00:04:18.724 "flush": true,
00:04:18.724 "reset": true,
00:04:18.724 "nvme_admin": false,
00:04:18.724 "nvme_io": false,
00:04:18.724 "nvme_io_md": false,
00:04:18.724 "write_zeroes": true,
00:04:18.724 "zcopy": true,
00:04:18.724 "get_zone_info": false,
00:04:18.724 "zone_management": false,
00:04:18.724 "zone_append": false,
00:04:18.724 "compare": false,
00:04:18.724 "compare_and_write": false,
00:04:18.724 "abort": true,
00:04:18.724 "seek_hole": false,
00:04:18.724 "seek_data": false,
00:04:18.724 "copy": true,
00:04:18.724 "nvme_iov_md": false
00:04:18.724 },
00:04:18.724 "memory_domains": [
00:04:18.724 {
00:04:18.724 "dma_device_id": "system",
00:04:18.724 "dma_device_type": 1
00:04:18.724 },
00:04:18.724 {
00:04:18.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:18.724 "dma_device_type": 2
00:04:18.724 }
00:04:18.724 ],
00:04:18.724 "driver_specific": {}
00:04:18.724 }
00:04:18.724 ]'
00:04:18.724 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:18.984 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:18.985 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:18.985 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:18.985 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:18.985 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:18.985 14:36:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:18.985
00:04:18.985 real 0m0.155s
00:04:18.985 user 0m0.083s
00:04:18.985 sys 0m0.026s
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.985 14:36:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:18.985 ************************************
00:04:18.985 END TEST rpc_plugins
00:04:18.985 ************************************
00:04:18.985 14:36:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:18.985 14:36:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.985 14:36:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.985 14:36:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:18.985 ************************************
00:04:18.985 START TEST rpc_trace_cmd_test ************************************
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:18.985 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58160",
00:04:18.985 "tpoint_group_mask": "0x8",
00:04:18.985 "iscsi_conn": {
00:04:18.985 "mask": "0x2",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "scsi": {
00:04:18.985 "mask": "0x4",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "bdev": {
00:04:18.985 "mask": "0x8",
00:04:18.985 "tpoint_mask": "0xffffffffffffffff"
00:04:18.985 },
00:04:18.985 "nvmf_rdma": {
00:04:18.985 "mask": "0x10",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "nvmf_tcp": {
00:04:18.985 "mask": "0x20",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "ftl": {
00:04:18.985 "mask": "0x40",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "blobfs": {
00:04:18.985 "mask": "0x80",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "dsa": {
00:04:18.985 "mask": "0x200",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "thread": {
00:04:18.985 "mask": "0x400",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "nvme_pcie": {
00:04:18.985 "mask": "0x800",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "iaa": {
00:04:18.985 "mask": "0x1000",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "nvme_tcp": {
00:04:18.985 "mask": "0x2000",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "bdev_nvme": {
00:04:18.985 "mask": "0x4000",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "sock": {
00:04:18.985 "mask": "0x8000",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "blob": {
00:04:18.985 "mask": "0x10000",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "bdev_raid": {
00:04:18.985 "mask": "0x20000",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 },
00:04:18.985 "scheduler": {
00:04:18.985 "mask": "0x40000",
00:04:18.985 "tpoint_mask": "0x0"
00:04:18.985 }
00:04:18.985 }'
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:18.985 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:19.244
00:04:19.244 real 0m0.262s
00:04:19.244 user 0m0.213s
00:04:19.244 sys 0m0.042s
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:19.244 14:36:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:19.244 ************************************
00:04:19.244 END TEST rpc_trace_cmd_test
00:04:19.244 ************************************
00:04:19.244 14:36:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:19.244 14:36:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:19.244 14:36:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:19.244 14:36:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:19.244 14:36:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:19.244 14:36:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:19.244 ************************************
00:04:19.244 START TEST rpc_daemon_integrity
00:04:19.244 ************************************
00:04:19.244 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:19.244 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:19.244 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.244 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.244 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.244 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:19.244 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.503 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:19.503 {
00:04:19.503 "name": "Malloc2",
00:04:19.503 "aliases": [
00:04:19.503 "6e492691-3ecc-4541-9b77-d9db7db38f7b"
00:04:19.503 ],
00:04:19.503 "product_name": "Malloc disk",
00:04:19.503 "block_size": 512,
00:04:19.503 "num_blocks": 16384,
00:04:19.503 "uuid": "6e492691-3ecc-4541-9b77-d9db7db38f7b",
00:04:19.503 "assigned_rate_limits": {
00:04:19.503 "rw_ios_per_sec": 0,
00:04:19.503 "rw_mbytes_per_sec": 0,
00:04:19.503 "r_mbytes_per_sec": 0,
00:04:19.503 "w_mbytes_per_sec": 0
00:04:19.503 },
00:04:19.503 "claimed": false,
00:04:19.503 "zoned": false,
00:04:19.503 "supported_io_types": {
00:04:19.503 "read": true,
00:04:19.503 "write": true,
00:04:19.503 "unmap": true,
00:04:19.503 "flush": true,
00:04:19.503 "reset": true,
00:04:19.503 "nvme_admin": false,
00:04:19.503 "nvme_io": false,
00:04:19.503 "nvme_io_md": false,
00:04:19.503 "write_zeroes": true,
00:04:19.503 "zcopy": true,
00:04:19.503 "get_zone_info": false,
00:04:19.503 "zone_management": false,
00:04:19.503 "zone_append": false,
00:04:19.503 "compare": false,
00:04:19.503 "compare_and_write": false,
00:04:19.503 "abort": true,
00:04:19.503 "seek_hole": false,
00:04:19.503 "seek_data": false,
00:04:19.503 "copy": true,
00:04:19.503 "nvme_iov_md": false
00:04:19.503 },
00:04:19.503 "memory_domains": [
00:04:19.503 {
00:04:19.503 "dma_device_id": "system",
00:04:19.503 "dma_device_type": 1
00:04:19.503 },
00:04:19.503 {
00:04:19.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:19.503 "dma_device_type": 2
00:04:19.503 }
00:04:19.503 ],
00:04:19.504 "driver_specific": {}
00:04:19.504 }
00:04:19.504 ]'
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.504 [2024-12-09 14:36:57.484594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:19.504 [2024-12-09 14:36:57.484685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:19.504 [2024-12-09 14:36:57.484713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:04:19.504 [2024-12-09 14:36:57.484726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:19.504 [2024-12-09 14:36:57.487488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:19.504 [2024-12-09 14:36:57.487562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:19.504 Passthru0
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:19.504 {
00:04:19.504 "name": "Malloc2",
00:04:19.504 "aliases": [
00:04:19.504 "6e492691-3ecc-4541-9b77-d9db7db38f7b"
00:04:19.504 ],
00:04:19.504 "product_name": "Malloc disk",
00:04:19.504 "block_size": 512,
00:04:19.504 "num_blocks": 16384,
00:04:19.504 "uuid": "6e492691-3ecc-4541-9b77-d9db7db38f7b",
00:04:19.504 "assigned_rate_limits": {
00:04:19.504 "rw_ios_per_sec": 0,
00:04:19.504 "rw_mbytes_per_sec": 0,
00:04:19.504 "r_mbytes_per_sec": 0,
00:04:19.504 "w_mbytes_per_sec": 0
00:04:19.504 },
00:04:19.504 "claimed": true,
00:04:19.504 "claim_type": "exclusive_write",
00:04:19.504 "zoned": false,
00:04:19.504 "supported_io_types": {
00:04:19.504 "read": true,
00:04:19.504 "write": true,
00:04:19.504 "unmap": true,
00:04:19.504 "flush": true,
00:04:19.504 "reset": true,
00:04:19.504 "nvme_admin": false,
00:04:19.504 "nvme_io": false,
00:04:19.504 "nvme_io_md": false,
00:04:19.504 "write_zeroes": true,
00:04:19.504 "zcopy": true,
00:04:19.504 "get_zone_info": false,
00:04:19.504 "zone_management": false,
00:04:19.504 "zone_append": false,
00:04:19.504 "compare": false,
00:04:19.504 "compare_and_write": false,
00:04:19.504 "abort": true,
00:04:19.504 "seek_hole": false,
00:04:19.504 "seek_data": false,
00:04:19.504 "copy": true,
00:04:19.504 "nvme_iov_md": false
00:04:19.504 },
00:04:19.504 "memory_domains": [
00:04:19.504 {
00:04:19.504 "dma_device_id": "system",
00:04:19.504 "dma_device_type": 1
00:04:19.504 },
00:04:19.504 {
00:04:19.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:19.504 "dma_device_type": 2
00:04:19.504 }
00:04:19.504 ],
00:04:19.504 "driver_specific": {}
00:04:19.504 },
00:04:19.504 {
00:04:19.504 "name": "Passthru0",
00:04:19.504 "aliases": [
00:04:19.504 "34d5781a-c63b-52b1-8f5c-1b7cd7cb377f"
00:04:19.504 ],
00:04:19.504 "product_name": "passthru",
00:04:19.504 "block_size": 512,
00:04:19.504 "num_blocks": 16384,
00:04:19.504 "uuid": "34d5781a-c63b-52b1-8f5c-1b7cd7cb377f",
00:04:19.504 "assigned_rate_limits": {
00:04:19.504 "rw_ios_per_sec": 0,
00:04:19.504 "rw_mbytes_per_sec": 0,
00:04:19.504 "r_mbytes_per_sec": 0,
00:04:19.504 "w_mbytes_per_sec": 0
00:04:19.504 },
00:04:19.504 "claimed": false,
00:04:19.504 "zoned": false,
00:04:19.504 "supported_io_types": {
00:04:19.504 "read": true,
00:04:19.504 "write": true,
00:04:19.504 "unmap": true,
00:04:19.504 "flush": true,
00:04:19.504 "reset": true,
00:04:19.504 "nvme_admin": false,
00:04:19.504 "nvme_io": false,
00:04:19.504 "nvme_io_md": false,
00:04:19.504 "write_zeroes": true,
00:04:19.504 "zcopy": true,
00:04:19.504 "get_zone_info": false,
00:04:19.504 "zone_management": false,
00:04:19.504 "zone_append": false,
00:04:19.504 "compare": false,
00:04:19.504 "compare_and_write": false,
00:04:19.504 "abort": true,
00:04:19.504 "seek_hole": false,
00:04:19.504 "seek_data": false,
00:04:19.504 "copy": true,
00:04:19.504 "nvme_iov_md": false
00:04:19.504 },
00:04:19.504 "memory_domains": [
00:04:19.504 {
00:04:19.504 "dma_device_id": "system",
00:04:19.504 "dma_device_type": 1
00:04:19.504 },
00:04:19.504 {
00:04:19.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:19.504 "dma_device_type": 2
00:04:19.504 }
00:04:19.504 ],
00:04:19.504 "driver_specific": {
00:04:19.504 "passthru": {
00:04:19.504 "name": "Passthru0",
00:04:19.504 "base_bdev_name": "Malloc2"
00:04:19.504 }
00:04:19.504 }
00:04:19.504 }
00:04:19.504 ]'
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:19.504 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:19.763 14:36:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:19.763
00:04:19.763 real 0m0.315s
00:04:19.763 user 0m0.176s
00:04:19.763 sys 0m0.037s
00:04:19.763 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:19.763 14:36:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:19.763 ************************************
00:04:19.763 END TEST rpc_daemon_integrity
00:04:19.763 ************************************
00:04:19.763 14:36:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:19.763 14:36:57 rpc -- rpc/rpc.sh@84 -- # killprocess 58160
00:04:19.763 14:36:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 58160 ']'
00:04:19.763 14:36:57 rpc -- common/autotest_common.sh@958 -- # kill -0 58160
00:04:19.763 14:36:57 rpc -- common/autotest_common.sh@959 -- # uname
00:04:19.763 14:36:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:19.763 14:36:57 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58160
00:04:19.763 14:36:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:19.763 14:36:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58160
14:36:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58160'
14:36:57 rpc -- common/autotest_common.sh@973 -- # kill 58160
14:36:57 rpc -- common/autotest_common.sh@978 -- # wait 58160
00:04:23.050
00:04:23.050 real 0m5.777s
00:04:23.050 user 0m6.354s
00:04:23.050 sys 0m0.904s
00:04:23.050 14:37:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:23.050 ************************************
00:04:23.050 END TEST rpc
00:04:23.050 ************************************
00:04:23.050 14:37:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:23.050 14:37:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:23.050 14:37:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.050 14:37:00 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.050 14:37:00 -- common/autotest_common.sh@10 -- # set +x
00:04:23.050 ************************************
00:04:23.050 START TEST skip_rpc
00:04:23.050 ************************************
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:23.050 * Looking for test storage...
00:04:23.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:23.050 14:37:00 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:23.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.050 --rc genhtml_branch_coverage=1
00:04:23.050 --rc genhtml_function_coverage=1
00:04:23.050 --rc genhtml_legend=1
00:04:23.050 --rc geninfo_all_blocks=1
00:04:23.050 --rc geninfo_unexecuted_blocks=1
00:04:23.050
00:04:23.050 '
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:23.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.050 --rc genhtml_branch_coverage=1
00:04:23.050 --rc genhtml_function_coverage=1
00:04:23.050 --rc genhtml_legend=1
00:04:23.050 --rc geninfo_all_blocks=1
00:04:23.050 --rc geninfo_unexecuted_blocks=1
00:04:23.050
00:04:23.050 '
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:23.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.050 --rc genhtml_branch_coverage=1
00:04:23.050 --rc genhtml_function_coverage=1
00:04:23.050 --rc genhtml_legend=1
00:04:23.050 --rc geninfo_all_blocks=1
00:04:23.050 --rc geninfo_unexecuted_blocks=1
00:04:23.050
00:04:23.050 '
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:23.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.050 --rc genhtml_branch_coverage=1
00:04:23.050 --rc genhtml_function_coverage=1
00:04:23.050 --rc genhtml_legend=1
00:04:23.050 --rc geninfo_all_blocks=1
00:04:23.050 --rc geninfo_unexecuted_blocks=1
00:04:23.050
00:04:23.050 '
00:04:23.050 14:37:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:23.050 14:37:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:23.050 14:37:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.050 14:37:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:23.050 ************************************
00:04:23.050 START TEST skip_rpc
00:04:23.050 ************************************
00:04:23.050 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:23.050 14:37:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58389
00:04:23.050 14:37:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:23.050 14:37:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:23.050 14:37:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:23.050 [2024-12-09 14:37:00.896709] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
00:04:23.050 [2024-12-09 14:37:00.896838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ]
00:04:23.050 [2024-12-09 14:37:01.077089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:23.341 [2024-12-09 14:37:01.210209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58389
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58389 ']'
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58389
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58389
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:28.628 14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58389
14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58389'
14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58389
14:37:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58389
00:04:30.532
00:04:30.532 real 0m7.716s
00:04:30.532 user 0m7.224s
00:04:30.532 sys 0m0.399s
00:04:30.532 14:37:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:30.532 14:37:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:30.532 ************************************
00:04:30.532 END TEST skip_rpc
00:04:30.532 ************************************
00:04:30.532 14:37:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:30.532 14:37:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:30.532 14:37:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:30.532 14:37:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x
************************************
00:04:30.532 START TEST skip_rpc_with_json
00:04:30.532 ************************************
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58504
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58504
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58504 ']'
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:30.532 14:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:30.791 [2024-12-09 14:37:08.657630] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
00:04:30.791 [2024-12-09 14:37:08.657762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58504 ]
00:04:30.791 [2024-12-09 14:37:08.835397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:31.050 [2024-12-09 14:37:08.963275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:31.991 [2024-12-09 14:37:09.897392] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:31.991 request:
00:04:31.991 {
00:04:31.991 "trtype": "tcp",
00:04:31.991 "method": "nvmf_get_transports",
00:04:31.991 "req_id": 1
00:04:31.991 }
00:04:31.991 Got JSON-RPC error response
00:04:31.991 response:
00:04:31.991 {
00:04:31.991 "code": -19,
00:04:31.991 "message": "No such device"
00:04:31.991 }
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:31.991 [2024-12-09 14:37:09.909517] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:31.991 14:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:31.992 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:31.992 14:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:31.992 {
00:04:31.992 "subsystems": [
00:04:31.992 {
00:04:31.992 "subsystem": "fsdev",
00:04:31.992 "config": [
00:04:31.992 {
00:04:31.992 "method": "fsdev_set_opts",
00:04:31.992 "params": {
00:04:31.992 "fsdev_io_pool_size": 65535,
00:04:31.992 "fsdev_io_cache_size": 256
00:04:31.992 }
00:04:31.992 }
00:04:31.992 ]
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "subsystem": "keyring",
00:04:31.992 "config": []
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "subsystem": "iobuf",
00:04:31.992 "config": [
00:04:31.992 {
00:04:31.992 "method": "iobuf_set_options",
00:04:31.992 "params": {
00:04:31.992 "small_pool_count": 8192,
00:04:31.992 "large_pool_count": 1024,
00:04:31.992 "small_bufsize": 8192,
00:04:31.992 "large_bufsize": 135168,
00:04:31.992 "enable_numa": false
00:04:31.992 }
00:04:31.992 }
00:04:31.992 ]
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "subsystem": "sock",
00:04:31.992 "config": [
00:04:31.992 {
00:04:31.992 "method": "sock_set_default_impl",
00:04:31.992 "params": {
00:04:31.992 "impl_name": "posix"
00:04:31.992 }
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "method": "sock_impl_set_options",
00:04:31.992 "params": {
00:04:31.992 "impl_name": "ssl",
00:04:31.992 "recv_buf_size": 4096,
00:04:31.992 "send_buf_size": 4096,
00:04:31.992 "enable_recv_pipe": true,
00:04:31.992 "enable_quickack": false,
00:04:31.992 "enable_placement_id": 0,
00:04:31.992 "enable_zerocopy_send_server": true,
00:04:31.992 "enable_zerocopy_send_client": false,
00:04:31.992 "zerocopy_threshold": 0,
00:04:31.992 "tls_version": 0,
00:04:31.992 "enable_ktls": false
00:04:31.992 }
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "method": "sock_impl_set_options",
00:04:31.992 "params": {
00:04:31.992 "impl_name": "posix",
00:04:31.992 "recv_buf_size": 2097152,
00:04:31.992 "send_buf_size": 2097152,
00:04:31.992 "enable_recv_pipe": true,
00:04:31.992 "enable_quickack": false,
00:04:31.992 "enable_placement_id": 0,
00:04:31.992 "enable_zerocopy_send_server": true,
00:04:31.992 "enable_zerocopy_send_client": false,
00:04:31.992 "zerocopy_threshold": 0,
00:04:31.992 "tls_version": 0,
00:04:31.992 "enable_ktls": false
00:04:31.992 }
00:04:31.992 }
00:04:31.992 ]
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "subsystem": "vmd",
00:04:31.992 "config": []
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "subsystem": "accel",
00:04:31.992 "config": [
00:04:31.992 {
00:04:31.992 "method": "accel_set_options",
00:04:31.992 "params": {
00:04:31.992 "small_cache_size": 128,
00:04:31.992 "large_cache_size": 16,
00:04:31.992 "task_count": 2048,
00:04:31.992 "sequence_count": 2048,
00:04:31.992 "buf_count": 2048
00:04:31.992 }
00:04:31.992 }
00:04:31.992 ]
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "subsystem": "bdev",
00:04:31.992 "config": [
00:04:31.992 {
00:04:31.992 "method": "bdev_set_options",
00:04:31.992 "params": {
00:04:31.992 "bdev_io_pool_size": 65535,
00:04:31.992 "bdev_io_cache_size": 256,
00:04:31.992 "bdev_auto_examine": true,
00:04:31.992 "iobuf_small_cache_size": 128,
00:04:31.992 "iobuf_large_cache_size": 16
00:04:31.992 }
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "method": "bdev_raid_set_options",
00:04:31.992 "params": {
00:04:31.992 "process_window_size_kb": 1024,
00:04:31.992 "process_max_bandwidth_mb_sec": 0
00:04:31.992 }
00:04:31.992 },
00:04:31.992 {
00:04:31.992 "method": "bdev_iscsi_set_options",
00:04:31.992 "params": { 00:04:31.992 "timeout_sec": 30 00:04:31.992 } 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "method": "bdev_nvme_set_options", 00:04:31.992 "params": { 00:04:31.992 "action_on_timeout": "none", 00:04:31.992 "timeout_us": 0, 00:04:31.992 "timeout_admin_us": 0, 00:04:31.992 "keep_alive_timeout_ms": 10000, 00:04:31.992 "arbitration_burst": 0, 00:04:31.992 "low_priority_weight": 0, 00:04:31.992 "medium_priority_weight": 0, 00:04:31.992 "high_priority_weight": 0, 00:04:31.992 "nvme_adminq_poll_period_us": 10000, 00:04:31.992 "nvme_ioq_poll_period_us": 0, 00:04:31.992 "io_queue_requests": 0, 00:04:31.992 "delay_cmd_submit": true, 00:04:31.992 "transport_retry_count": 4, 00:04:31.992 "bdev_retry_count": 3, 00:04:31.992 "transport_ack_timeout": 0, 00:04:31.992 "ctrlr_loss_timeout_sec": 0, 00:04:31.992 "reconnect_delay_sec": 0, 00:04:31.992 "fast_io_fail_timeout_sec": 0, 00:04:31.992 "disable_auto_failback": false, 00:04:31.992 "generate_uuids": false, 00:04:31.992 "transport_tos": 0, 00:04:31.992 "nvme_error_stat": false, 00:04:31.992 "rdma_srq_size": 0, 00:04:31.992 "io_path_stat": false, 00:04:31.992 "allow_accel_sequence": false, 00:04:31.992 "rdma_max_cq_size": 0, 00:04:31.992 "rdma_cm_event_timeout_ms": 0, 00:04:31.992 "dhchap_digests": [ 00:04:31.992 "sha256", 00:04:31.992 "sha384", 00:04:31.992 "sha512" 00:04:31.992 ], 00:04:31.992 "dhchap_dhgroups": [ 00:04:31.992 "null", 00:04:31.992 "ffdhe2048", 00:04:31.992 "ffdhe3072", 00:04:31.992 "ffdhe4096", 00:04:31.992 "ffdhe6144", 00:04:31.992 "ffdhe8192" 00:04:31.992 ] 00:04:31.992 } 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "method": "bdev_nvme_set_hotplug", 00:04:31.992 "params": { 00:04:31.992 "period_us": 100000, 00:04:31.992 "enable": false 00:04:31.992 } 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "method": "bdev_wait_for_examine" 00:04:31.992 } 00:04:31.992 ] 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "subsystem": "scsi", 00:04:31.992 "config": null 00:04:31.992 }, 00:04:31.992 { 
00:04:31.992 "subsystem": "scheduler", 00:04:31.992 "config": [ 00:04:31.992 { 00:04:31.992 "method": "framework_set_scheduler", 00:04:31.992 "params": { 00:04:31.992 "name": "static" 00:04:31.992 } 00:04:31.992 } 00:04:31.992 ] 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "subsystem": "vhost_scsi", 00:04:31.992 "config": [] 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "subsystem": "vhost_blk", 00:04:31.992 "config": [] 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "subsystem": "ublk", 00:04:31.992 "config": [] 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "subsystem": "nbd", 00:04:31.992 "config": [] 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "subsystem": "nvmf", 00:04:31.992 "config": [ 00:04:31.992 { 00:04:31.992 "method": "nvmf_set_config", 00:04:31.992 "params": { 00:04:31.992 "discovery_filter": "match_any", 00:04:31.992 "admin_cmd_passthru": { 00:04:31.992 "identify_ctrlr": false 00:04:31.992 }, 00:04:31.992 "dhchap_digests": [ 00:04:31.992 "sha256", 00:04:31.992 "sha384", 00:04:31.992 "sha512" 00:04:31.992 ], 00:04:31.992 "dhchap_dhgroups": [ 00:04:31.992 "null", 00:04:31.992 "ffdhe2048", 00:04:31.992 "ffdhe3072", 00:04:31.992 "ffdhe4096", 00:04:31.992 "ffdhe6144", 00:04:31.992 "ffdhe8192" 00:04:31.992 ] 00:04:31.992 } 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "method": "nvmf_set_max_subsystems", 00:04:31.992 "params": { 00:04:31.992 "max_subsystems": 1024 00:04:31.992 } 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "method": "nvmf_set_crdt", 00:04:31.992 "params": { 00:04:31.992 "crdt1": 0, 00:04:31.992 "crdt2": 0, 00:04:31.992 "crdt3": 0 00:04:31.992 } 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "method": "nvmf_create_transport", 00:04:31.992 "params": { 00:04:31.992 "trtype": "TCP", 00:04:31.992 "max_queue_depth": 128, 00:04:31.992 "max_io_qpairs_per_ctrlr": 127, 00:04:31.992 "in_capsule_data_size": 4096, 00:04:31.992 "max_io_size": 131072, 00:04:31.992 "io_unit_size": 131072, 00:04:31.992 "max_aq_depth": 128, 00:04:31.992 "num_shared_buffers": 511, 
00:04:31.992 "buf_cache_size": 4294967295, 00:04:31.992 "dif_insert_or_strip": false, 00:04:31.992 "zcopy": false, 00:04:31.992 "c2h_success": true, 00:04:31.992 "sock_priority": 0, 00:04:31.992 "abort_timeout_sec": 1, 00:04:31.992 "ack_timeout": 0, 00:04:31.992 "data_wr_pool_size": 0 00:04:31.992 } 00:04:31.992 } 00:04:31.992 ] 00:04:31.992 }, 00:04:31.992 { 00:04:31.992 "subsystem": "iscsi", 00:04:31.992 "config": [ 00:04:31.992 { 00:04:31.992 "method": "iscsi_set_options", 00:04:31.992 "params": { 00:04:31.992 "node_base": "iqn.2016-06.io.spdk", 00:04:31.992 "max_sessions": 128, 00:04:31.992 "max_connections_per_session": 2, 00:04:31.992 "max_queue_depth": 64, 00:04:31.992 "default_time2wait": 2, 00:04:31.992 "default_time2retain": 20, 00:04:31.992 "first_burst_length": 8192, 00:04:31.992 "immediate_data": true, 00:04:31.992 "allow_duplicated_isid": false, 00:04:31.992 "error_recovery_level": 0, 00:04:31.992 "nop_timeout": 60, 00:04:31.992 "nop_in_interval": 30, 00:04:31.992 "disable_chap": false, 00:04:31.992 "require_chap": false, 00:04:31.992 "mutual_chap": false, 00:04:31.992 "chap_group": 0, 00:04:31.992 "max_large_datain_per_connection": 64, 00:04:31.992 "max_r2t_per_connection": 4, 00:04:31.992 "pdu_pool_size": 36864, 00:04:31.992 "immediate_data_pool_size": 16384, 00:04:31.992 "data_out_pool_size": 2048 00:04:31.992 } 00:04:31.992 } 00:04:31.992 ] 00:04:31.992 } 00:04:31.992 ] 00:04:31.992 } 00:04:31.993 14:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.993 14:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58504 00:04:31.993 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58504 ']' 00:04:31.993 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58504 00:04:31.993 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:31.993 14:37:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.993 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58504 00:04:32.252 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.252 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.252 killing process with pid 58504 00:04:32.252 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58504' 00:04:32.252 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58504 00:04:32.252 14:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58504 00:04:35.539 14:37:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.539 14:37:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58560 00:04:35.539 14:37:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58560 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58560 ']' 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58560 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58560 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:40.812 killing process with pid 58560 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58560' 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58560 00:04:40.812 14:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58560 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.719 00:04:42.719 real 0m12.013s 00:04:42.719 user 0m11.458s 00:04:42.719 sys 0m0.862s 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.719 ************************************ 00:04:42.719 END TEST skip_rpc_with_json 00:04:42.719 ************************************ 00:04:42.719 14:37:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.719 14:37:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.719 14:37:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.719 14:37:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.719 ************************************ 00:04:42.719 START TEST skip_rpc_with_delay 00:04:42.719 ************************************ 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:42.719 14:37:20 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.719 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.719 [2024-12-09 14:37:20.733693] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:42.720 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:42.720 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.720 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.720 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.720 00:04:42.720 real 0m0.178s 00:04:42.720 user 0m0.090s 00:04:42.720 sys 0m0.086s 00:04:42.720 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.720 14:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.720 ************************************ 00:04:42.720 END TEST skip_rpc_with_delay 00:04:42.720 ************************************ 00:04:42.979 14:37:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.979 14:37:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.979 14:37:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.979 14:37:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.979 14:37:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.979 14:37:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.979 ************************************ 00:04:42.979 START TEST exit_on_failed_rpc_init 00:04:42.979 ************************************ 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58699 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58699 00:04:42.979 14:37:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58699 ']' 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.979 14:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.979 [2024-12-09 14:37:20.975431] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:04:42.979 [2024-12-09 14:37:20.975592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58699 ] 00:04:43.239 [2024-12-09 14:37:21.151457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.239 [2024-12-09 14:37:21.278917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.181 14:37:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:44.181 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.441 [2024-12-09 14:37:22.348218] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:04:44.441 [2024-12-09 14:37:22.348351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58717 ] 00:04:44.441 [2024-12-09 14:37:22.518584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.701 [2024-12-09 14:37:22.663260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.701 [2024-12-09 14:37:22.663372] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:44.701 [2024-12-09 14:37:22.663387] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:44.701 [2024-12-09 14:37:22.663400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58699 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58699 ']' 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58699 00:04:44.960 14:37:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:44.960 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.961 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58699 00:04:44.961 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.961 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.961 killing process with pid 58699 00:04:44.961 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58699' 00:04:44.961 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58699 00:04:44.961 14:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58699 00:04:48.256 00:04:48.256 real 0m4.916s 00:04:48.256 user 0m5.357s 00:04:48.256 sys 0m0.586s 00:04:48.256 14:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.256 14:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.256 ************************************ 00:04:48.256 END TEST exit_on_failed_rpc_init 00:04:48.256 ************************************ 00:04:48.256 14:37:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.256 ************************************ 00:04:48.256 END TEST skip_rpc 00:04:48.256 ************************************ 00:04:48.256 00:04:48.256 real 0m25.265s 00:04:48.256 user 0m24.327s 00:04:48.256 sys 0m2.193s 00:04:48.256 14:37:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.256 14:37:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.256 14:37:25 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.256 14:37:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.256 14:37:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.256 14:37:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.256 ************************************ 00:04:48.256 START TEST rpc_client 00:04:48.256 ************************************ 00:04:48.256 14:37:25 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.256 * Looking for test storage... 00:04:48.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:48.256 14:37:25 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.256 14:37:25 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.256 14:37:25 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.256 14:37:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.256 --rc genhtml_branch_coverage=1 00:04:48.256 --rc genhtml_function_coverage=1 00:04:48.256 --rc genhtml_legend=1 00:04:48.256 --rc geninfo_all_blocks=1 00:04:48.256 --rc geninfo_unexecuted_blocks=1 00:04:48.256 00:04:48.256 ' 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.256 --rc genhtml_branch_coverage=1 00:04:48.256 --rc genhtml_function_coverage=1 00:04:48.256 --rc 
genhtml_legend=1 00:04:48.256 --rc geninfo_all_blocks=1 00:04:48.256 --rc geninfo_unexecuted_blocks=1 00:04:48.256 00:04:48.256 ' 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.256 --rc genhtml_branch_coverage=1 00:04:48.256 --rc genhtml_function_coverage=1 00:04:48.256 --rc genhtml_legend=1 00:04:48.256 --rc geninfo_all_blocks=1 00:04:48.256 --rc geninfo_unexecuted_blocks=1 00:04:48.256 00:04:48.256 ' 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.256 --rc genhtml_branch_coverage=1 00:04:48.256 --rc genhtml_function_coverage=1 00:04:48.256 --rc genhtml_legend=1 00:04:48.256 --rc geninfo_all_blocks=1 00:04:48.256 --rc geninfo_unexecuted_blocks=1 00:04:48.256 00:04:48.256 ' 00:04:48.256 14:37:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:48.256 OK 00:04:48.256 14:37:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:48.256 00:04:48.256 real 0m0.271s 00:04:48.256 user 0m0.145s 00:04:48.256 sys 0m0.145s 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.256 14:37:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:48.256 ************************************ 00:04:48.256 END TEST rpc_client 00:04:48.256 ************************************ 00:04:48.256 14:37:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.256 14:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.256 14:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.256 14:37:26 -- common/autotest_common.sh@10 -- # set +x 00:04:48.256 ************************************ 00:04:48.256 START TEST json_config 
00:04:48.256 ************************************ 00:04:48.256 14:37:26 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.256 14:37:26 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.256 14:37:26 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.256 14:37:26 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.256 14:37:26 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.256 14:37:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.256 14:37:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.256 14:37:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.256 14:37:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.256 14:37:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.256 14:37:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.256 14:37:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.256 14:37:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.256 14:37:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.256 14:37:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.256 14:37:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.256 14:37:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:48.256 14:37:26 json_config -- scripts/common.sh@345 -- # : 1 00:04:48.256 14:37:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.256 14:37:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.257 14:37:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:48.257 14:37:26 json_config -- scripts/common.sh@353 -- # local d=1 00:04:48.257 14:37:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.257 14:37:26 json_config -- scripts/common.sh@355 -- # echo 1 00:04:48.257 14:37:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.257 14:37:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:48.257 14:37:26 json_config -- scripts/common.sh@353 -- # local d=2 00:04:48.257 14:37:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.257 14:37:26 json_config -- scripts/common.sh@355 -- # echo 2 00:04:48.257 14:37:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.257 14:37:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.257 14:37:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.257 14:37:26 json_config -- scripts/common.sh@368 -- # return 0 00:04:48.257 14:37:26 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.257 14:37:26 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.257 --rc genhtml_branch_coverage=1 00:04:48.257 --rc genhtml_function_coverage=1 00:04:48.257 --rc genhtml_legend=1 00:04:48.257 --rc geninfo_all_blocks=1 00:04:48.257 --rc geninfo_unexecuted_blocks=1 00:04:48.257 00:04:48.257 ' 00:04:48.257 14:37:26 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.257 --rc genhtml_branch_coverage=1 00:04:48.257 --rc genhtml_function_coverage=1 00:04:48.257 --rc genhtml_legend=1 00:04:48.257 --rc geninfo_all_blocks=1 00:04:48.257 --rc geninfo_unexecuted_blocks=1 00:04:48.257 00:04:48.257 ' 00:04:48.257 14:37:26 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.257 --rc genhtml_branch_coverage=1 00:04:48.257 --rc genhtml_function_coverage=1 00:04:48.257 --rc genhtml_legend=1 00:04:48.257 --rc geninfo_all_blocks=1 00:04:48.257 --rc geninfo_unexecuted_blocks=1 00:04:48.257 00:04:48.257 ' 00:04:48.257 14:37:26 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.257 --rc genhtml_branch_coverage=1 00:04:48.257 --rc genhtml_function_coverage=1 00:04:48.257 --rc genhtml_legend=1 00:04:48.257 --rc geninfo_all_blocks=1 00:04:48.257 --rc geninfo_unexecuted_blocks=1 00:04:48.257 00:04:48.257 ' 00:04:48.257 14:37:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.257 14:37:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77e609b4-a18d-4719-b7e6-68133c864077 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=77e609b4-a18d-4719-b7e6-68133c864077 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.517 14:37:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.517 14:37:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.517 14:37:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.517 14:37:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.517 14:37:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.517 14:37:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.517 14:37:26 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.517 14:37:26 json_config -- paths/export.sh@5 -- # export PATH 00:04:48.517 14:37:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@51 -- # : 0 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.517 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.517 14:37:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.517 14:37:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:48.517 14:37:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:48.517 14:37:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:48.517 14:37:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:48.517 14:37:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.517 WARNING: No tests are enabled so not running JSON configuration tests 00:04:48.517 14:37:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:48.517 14:37:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:48.517 00:04:48.517 real 0m0.187s 00:04:48.517 user 0m0.104s 00:04:48.517 sys 0m0.089s 00:04:48.517 14:37:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.517 14:37:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.517 ************************************ 00:04:48.517 END TEST json_config 00:04:48.517 ************************************ 00:04:48.517 14:37:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.517 14:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.517 14:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.517 14:37:26 -- common/autotest_common.sh@10 -- # set +x 00:04:48.517 ************************************ 00:04:48.517 START TEST json_config_extra_key 00:04:48.517 ************************************ 00:04:48.517 14:37:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.517 14:37:26 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.517 14:37:26 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.517 14:37:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.517 14:37:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:48.517 14:37:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:48.518 14:37:26 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.518 14:37:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.518 --rc genhtml_branch_coverage=1 00:04:48.518 --rc genhtml_function_coverage=1 00:04:48.518 --rc genhtml_legend=1 00:04:48.518 --rc geninfo_all_blocks=1 00:04:48.518 --rc geninfo_unexecuted_blocks=1 00:04:48.518 00:04:48.518 ' 00:04:48.518 14:37:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.518 --rc genhtml_branch_coverage=1 00:04:48.518 --rc genhtml_function_coverage=1 00:04:48.518 --rc 
genhtml_legend=1 00:04:48.518 --rc geninfo_all_blocks=1 00:04:48.518 --rc geninfo_unexecuted_blocks=1 00:04:48.518 00:04:48.518 ' 00:04:48.518 14:37:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.518 --rc genhtml_branch_coverage=1 00:04:48.518 --rc genhtml_function_coverage=1 00:04:48.518 --rc genhtml_legend=1 00:04:48.518 --rc geninfo_all_blocks=1 00:04:48.518 --rc geninfo_unexecuted_blocks=1 00:04:48.518 00:04:48.518 ' 00:04:48.518 14:37:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.518 --rc genhtml_branch_coverage=1 00:04:48.518 --rc genhtml_function_coverage=1 00:04:48.518 --rc genhtml_legend=1 00:04:48.518 --rc geninfo_all_blocks=1 00:04:48.518 --rc geninfo_unexecuted_blocks=1 00:04:48.518 00:04:48.518 ' 00:04:48.518 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77e609b4-a18d-4719-b7e6-68133c864077 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=77e609b4-a18d-4719-b7e6-68133c864077 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.518 14:37:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.518 14:37:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.518 14:37:26 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.518 14:37:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.518 14:37:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:48.518 14:37:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.518 14:37:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:48.778 14:37:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.778 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.778 14:37:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.778 14:37:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.778 14:37:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.778 INFO: launching applications... 00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:48.778 14:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58927 00:04:48.778 Waiting for target to run... 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58927 /var/tmp/spdk_tgt.sock 00:04:48.778 14:37:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58927 ']' 00:04:48.778 14:37:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.778 14:37:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.778 14:37:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:48.778 14:37:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.778 14:37:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.778 14:37:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.778 [2024-12-09 14:37:26.756020] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:04:48.778 [2024-12-09 14:37:26.756207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58927 ] 00:04:49.037 [2024-12-09 14:37:27.149335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.296 [2024-12-09 14:37:27.288978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.248 14:37:28 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.248 00:04:50.248 14:37:28 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:50.248 INFO: shutting down applications... 00:04:50.248 14:37:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:50.248 14:37:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58927 ]] 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58927 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:50.248 14:37:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.816 14:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.816 14:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.816 14:37:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:50.816 14:37:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.075 14:37:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.075 14:37:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.075 14:37:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:51.075 14:37:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.640 14:37:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.640 14:37:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.641 14:37:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:51.641 14:37:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.212 14:37:30 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:52.212 14:37:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.212 14:37:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:52.212 14:37:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.781 14:37:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.781 14:37:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.781 14:37:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:52.781 14:37:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.346 14:37:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.346 14:37:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.346 14:37:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:53.346 14:37:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.605 14:37:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.605 14:37:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.605 14:37:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58927 00:04:53.605 14:37:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.605 14:37:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.605 14:37:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.605 SPDK target shutdown done 00:04:53.605 14:37:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.605 Success 00:04:53.605 14:37:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.605 00:04:53.605 real 0m5.251s 00:04:53.605 user 0m4.966s 00:04:53.605 sys 0m0.595s 00:04:53.605 14:37:31 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:53.605 14:37:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.605 ************************************ 00:04:53.605 END TEST json_config_extra_key 00:04:53.605 ************************************ 00:04:53.866 14:37:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.866 14:37:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.866 14:37:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.866 14:37:31 -- common/autotest_common.sh@10 -- # set +x 00:04:53.866 ************************************ 00:04:53.866 START TEST alias_rpc 00:04:53.866 ************************************ 00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.866 * Looking for test storage... 00:04:53.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.866 14:37:31 alias_rpc -- 
scripts/common.sh@340 -- # ver1_l=2
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@345 -- # : 1
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:53.866 14:37:31 alias_rpc -- scripts/common.sh@368 -- # return 0
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.866 --rc genhtml_branch_coverage=1
00:04:53.866 --rc genhtml_function_coverage=1
00:04:53.866 --rc genhtml_legend=1
00:04:53.866 --rc geninfo_all_blocks=1
00:04:53.866 --rc geninfo_unexecuted_blocks=1
00:04:53.866
00:04:53.866 '
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.866 --rc genhtml_branch_coverage=1
00:04:53.866 --rc genhtml_function_coverage=1
00:04:53.866 --rc genhtml_legend=1
00:04:53.866 --rc geninfo_all_blocks=1
00:04:53.866 --rc geninfo_unexecuted_blocks=1
00:04:53.866
00:04:53.866 '
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.866 --rc genhtml_branch_coverage=1
00:04:53.866 --rc genhtml_function_coverage=1
00:04:53.866 --rc genhtml_legend=1
00:04:53.866 --rc geninfo_all_blocks=1
00:04:53.866 --rc geninfo_unexecuted_blocks=1
00:04:53.866
00:04:53.866 '
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.866 --rc genhtml_branch_coverage=1
00:04:53.866 --rc genhtml_function_coverage=1
00:04:53.866 --rc genhtml_legend=1
00:04:53.866 --rc geninfo_all_blocks=1
00:04:53.866 --rc geninfo_unexecuted_blocks=1
00:04:53.866
00:04:53.866 '
00:04:53.866 14:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:53.866 14:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59051
00:04:53.866 14:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59051
00:04:53.866 14:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59051 ']'
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:53.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:53.866 14:37:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:54.127 [2024-12-09 14:37:32.037074] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
00:04:54.127 [2024-12-09 14:37:32.037247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ]
00:04:54.127 [2024-12-09 14:37:32.217651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.386 [2024-12-09 14:37:32.352173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.326 14:37:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:55.326 14:37:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:55.326 14:37:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:04:55.586 14:37:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59051
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59051 ']'
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59051
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59051
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:55.586 14:37:33 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59051'
killing process with pid 59051
14:37:33 alias_rpc -- common/autotest_common.sh@973 -- # kill 59051
14:37:33 alias_rpc -- common/autotest_common.sh@978 -- # wait 59051
00:04:58.882
00:04:58.882 real 0m4.651s
00:04:58.882 user 0m4.765s
00:04:58.882 sys 0m0.583s
00:04:58.882 14:37:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.882 14:37:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:58.882 ************************************
00:04:58.882 END TEST alias_rpc
00:04:58.882 ************************************
00:04:58.882 14:37:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:04:58.882 14:37:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:04:58.882 14:37:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.882 14:37:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.882 14:37:36 -- common/autotest_common.sh@10 -- # set +x
00:04:58.882 ************************************
00:04:58.882 START TEST spdkcli_tcp
00:04:58.882 ************************************
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:04:58.882 * Looking for test storage...
00:04:58.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:58.882 14:37:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:58.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.882 --rc genhtml_branch_coverage=1
00:04:58.882 --rc genhtml_function_coverage=1
00:04:58.882 --rc genhtml_legend=1
00:04:58.882 --rc geninfo_all_blocks=1
00:04:58.882 --rc geninfo_unexecuted_blocks=1
00:04:58.882
00:04:58.882 '
00:04:58.882 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:58.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.882 --rc genhtml_branch_coverage=1
00:04:58.882 --rc genhtml_function_coverage=1
00:04:58.882 --rc genhtml_legend=1
00:04:58.882 --rc geninfo_all_blocks=1
00:04:58.883 --rc geninfo_unexecuted_blocks=1
00:04:58.883
00:04:58.883 '
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.883 --rc genhtml_branch_coverage=1
00:04:58.883 --rc genhtml_function_coverage=1
00:04:58.883 --rc genhtml_legend=1
00:04:58.883 --rc geninfo_all_blocks=1
00:04:58.883 --rc geninfo_unexecuted_blocks=1
00:04:58.883
00:04:58.883 '
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.883 --rc genhtml_branch_coverage=1
00:04:58.883 --rc genhtml_function_coverage=1
00:04:58.883 --rc genhtml_legend=1
00:04:58.883 --rc geninfo_all_blocks=1
00:04:58.883 --rc geninfo_unexecuted_blocks=1
00:04:58.883
00:04:58.883 '
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59163
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:58.883 14:37:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59163
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59163 ']'
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:58.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:58.883 14:37:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:58.883 [2024-12-09 14:37:36.782643] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
00:04:58.883 [2024-12-09 14:37:36.782788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59163 ]
00:04:58.883 [2024-12-09 14:37:36.963433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:59.141 [2024-12-09 14:37:37.104350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.141 [2024-12-09 14:37:37.104361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:00.077 14:37:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:00.077 14:37:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:00.077 14:37:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59186
00:05:00.077 14:37:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:00.077 14:37:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:00.337 [
00:05:00.337 "bdev_malloc_delete",
00:05:00.337 "bdev_malloc_create",
00:05:00.337 "bdev_null_resize",
00:05:00.337 "bdev_null_delete",
00:05:00.337 "bdev_null_create",
00:05:00.337 "bdev_nvme_cuse_unregister",
00:05:00.337 "bdev_nvme_cuse_register",
00:05:00.337 "bdev_opal_new_user",
00:05:00.337 "bdev_opal_set_lock_state",
00:05:00.337 "bdev_opal_delete",
00:05:00.337 "bdev_opal_get_info",
00:05:00.337 "bdev_opal_create",
00:05:00.337 "bdev_nvme_opal_revert",
00:05:00.337 "bdev_nvme_opal_init",
00:05:00.337 "bdev_nvme_send_cmd",
00:05:00.337 "bdev_nvme_set_keys",
00:05:00.337 "bdev_nvme_get_path_iostat",
00:05:00.337 "bdev_nvme_get_mdns_discovery_info",
00:05:00.337 "bdev_nvme_stop_mdns_discovery",
00:05:00.338 "bdev_nvme_start_mdns_discovery",
00:05:00.338 "bdev_nvme_set_multipath_policy",
00:05:00.338 "bdev_nvme_set_preferred_path",
00:05:00.338 "bdev_nvme_get_io_paths",
00:05:00.338 "bdev_nvme_remove_error_injection",
00:05:00.338 "bdev_nvme_add_error_injection",
00:05:00.338 "bdev_nvme_get_discovery_info",
00:05:00.338 "bdev_nvme_stop_discovery",
00:05:00.338 "bdev_nvme_start_discovery",
00:05:00.338 "bdev_nvme_get_controller_health_info",
00:05:00.338 "bdev_nvme_disable_controller",
00:05:00.338 "bdev_nvme_enable_controller",
00:05:00.338 "bdev_nvme_reset_controller",
00:05:00.338 "bdev_nvme_get_transport_statistics",
00:05:00.338 "bdev_nvme_apply_firmware",
00:05:00.338 "bdev_nvme_detach_controller",
00:05:00.338 "bdev_nvme_get_controllers",
00:05:00.338 "bdev_nvme_attach_controller",
00:05:00.338 "bdev_nvme_set_hotplug",
00:05:00.338 "bdev_nvme_set_options",
00:05:00.338 "bdev_passthru_delete",
00:05:00.338 "bdev_passthru_create",
00:05:00.338 "bdev_lvol_set_parent_bdev",
00:05:00.338 "bdev_lvol_set_parent",
00:05:00.338 "bdev_lvol_check_shallow_copy",
00:05:00.338 "bdev_lvol_start_shallow_copy",
00:05:00.338 "bdev_lvol_grow_lvstore",
00:05:00.338 "bdev_lvol_get_lvols",
00:05:00.338 "bdev_lvol_get_lvstores",
00:05:00.338 "bdev_lvol_delete",
00:05:00.338 "bdev_lvol_set_read_only",
00:05:00.338 "bdev_lvol_resize",
00:05:00.338 "bdev_lvol_decouple_parent",
00:05:00.338 "bdev_lvol_inflate",
00:05:00.338 "bdev_lvol_rename",
00:05:00.338 "bdev_lvol_clone_bdev",
00:05:00.338 "bdev_lvol_clone",
00:05:00.338 "bdev_lvol_snapshot",
00:05:00.338 "bdev_lvol_create",
00:05:00.338 "bdev_lvol_delete_lvstore",
00:05:00.338 "bdev_lvol_rename_lvstore",
00:05:00.338 "bdev_lvol_create_lvstore",
00:05:00.338 "bdev_raid_set_options",
00:05:00.338 "bdev_raid_remove_base_bdev",
00:05:00.338 "bdev_raid_add_base_bdev",
00:05:00.338 "bdev_raid_delete",
00:05:00.338 "bdev_raid_create",
00:05:00.338 "bdev_raid_get_bdevs",
00:05:00.338 "bdev_error_inject_error",
00:05:00.338 "bdev_error_delete",
00:05:00.338 "bdev_error_create",
00:05:00.338 "bdev_split_delete",
00:05:00.338 "bdev_split_create",
00:05:00.338 "bdev_delay_delete",
00:05:00.338 "bdev_delay_create",
00:05:00.338 "bdev_delay_update_latency",
00:05:00.338 "bdev_zone_block_delete",
00:05:00.338 "bdev_zone_block_create",
00:05:00.338 "blobfs_create",
00:05:00.338 "blobfs_detect",
00:05:00.338 "blobfs_set_cache_size",
00:05:00.338 "bdev_aio_delete",
00:05:00.338 "bdev_aio_rescan",
00:05:00.338 "bdev_aio_create",
00:05:00.338 "bdev_ftl_set_property",
00:05:00.338 "bdev_ftl_get_properties",
00:05:00.338 "bdev_ftl_get_stats",
00:05:00.338 "bdev_ftl_unmap",
00:05:00.338 "bdev_ftl_unload",
00:05:00.338 "bdev_ftl_delete",
00:05:00.338 "bdev_ftl_load",
00:05:00.338 "bdev_ftl_create",
00:05:00.338 "bdev_virtio_attach_controller",
00:05:00.338 "bdev_virtio_scsi_get_devices",
00:05:00.338 "bdev_virtio_detach_controller",
00:05:00.338 "bdev_virtio_blk_set_hotplug",
00:05:00.338 "bdev_iscsi_delete",
00:05:00.338 "bdev_iscsi_create",
00:05:00.338 "bdev_iscsi_set_options",
00:05:00.338 "accel_error_inject_error",
00:05:00.338 "ioat_scan_accel_module",
00:05:00.338 "dsa_scan_accel_module",
00:05:00.338 "iaa_scan_accel_module",
00:05:00.338 "keyring_file_remove_key",
00:05:00.338 "keyring_file_add_key",
00:05:00.338 "keyring_linux_set_options",
00:05:00.338 "fsdev_aio_delete",
00:05:00.338 "fsdev_aio_create",
00:05:00.338 "iscsi_get_histogram",
00:05:00.338 "iscsi_enable_histogram",
00:05:00.338 "iscsi_set_options",
00:05:00.338 "iscsi_get_auth_groups",
00:05:00.338 "iscsi_auth_group_remove_secret",
00:05:00.338 "iscsi_auth_group_add_secret",
00:05:00.338 "iscsi_delete_auth_group",
00:05:00.338 "iscsi_create_auth_group",
00:05:00.338 "iscsi_set_discovery_auth",
00:05:00.338 "iscsi_get_options",
00:05:00.338 "iscsi_target_node_request_logout",
00:05:00.338 "iscsi_target_node_set_redirect",
00:05:00.338 "iscsi_target_node_set_auth",
00:05:00.338 "iscsi_target_node_add_lun",
00:05:00.338 "iscsi_get_stats",
00:05:00.338 "iscsi_get_connections",
00:05:00.338 "iscsi_portal_group_set_auth",
00:05:00.338 "iscsi_start_portal_group",
00:05:00.338 "iscsi_delete_portal_group",
00:05:00.338 "iscsi_create_portal_group",
00:05:00.338 "iscsi_get_portal_groups",
00:05:00.338 "iscsi_delete_target_node",
00:05:00.338 "iscsi_target_node_remove_pg_ig_maps",
00:05:00.338 "iscsi_target_node_add_pg_ig_maps",
00:05:00.338 "iscsi_create_target_node",
00:05:00.338 "iscsi_get_target_nodes",
00:05:00.338 "iscsi_delete_initiator_group",
00:05:00.338 "iscsi_initiator_group_remove_initiators",
00:05:00.338 "iscsi_initiator_group_add_initiators",
00:05:00.338 "iscsi_create_initiator_group",
00:05:00.338 "iscsi_get_initiator_groups",
00:05:00.338 "nvmf_set_crdt",
00:05:00.338 "nvmf_set_config",
00:05:00.338 "nvmf_set_max_subsystems",
00:05:00.338 "nvmf_stop_mdns_prr",
00:05:00.338 "nvmf_publish_mdns_prr",
00:05:00.338 "nvmf_subsystem_get_listeners",
00:05:00.338 "nvmf_subsystem_get_qpairs",
00:05:00.338 "nvmf_subsystem_get_controllers",
00:05:00.338 "nvmf_get_stats",
00:05:00.338 "nvmf_get_transports",
00:05:00.338 "nvmf_create_transport",
00:05:00.338 "nvmf_get_targets",
00:05:00.338 "nvmf_delete_target",
00:05:00.338 "nvmf_create_target",
00:05:00.338 "nvmf_subsystem_allow_any_host",
00:05:00.338 "nvmf_subsystem_set_keys",
00:05:00.338 "nvmf_subsystem_remove_host",
00:05:00.338 "nvmf_subsystem_add_host",
00:05:00.338 "nvmf_ns_remove_host",
00:05:00.338 "nvmf_ns_add_host",
00:05:00.338 "nvmf_subsystem_remove_ns",
00:05:00.338 "nvmf_subsystem_set_ns_ana_group",
00:05:00.338 "nvmf_subsystem_add_ns",
00:05:00.338 "nvmf_subsystem_listener_set_ana_state",
00:05:00.338 "nvmf_discovery_get_referrals",
00:05:00.338 "nvmf_discovery_remove_referral",
00:05:00.338 "nvmf_discovery_add_referral",
00:05:00.338 "nvmf_subsystem_remove_listener",
00:05:00.338 "nvmf_subsystem_add_listener",
00:05:00.338 "nvmf_delete_subsystem",
00:05:00.338 "nvmf_create_subsystem",
00:05:00.338 "nvmf_get_subsystems",
00:05:00.338 "env_dpdk_get_mem_stats",
00:05:00.338 "nbd_get_disks",
00:05:00.338 "nbd_stop_disk",
00:05:00.338 "nbd_start_disk",
00:05:00.338 "ublk_recover_disk",
00:05:00.338 "ublk_get_disks",
00:05:00.338 "ublk_stop_disk",
00:05:00.338 "ublk_start_disk",
00:05:00.338 "ublk_destroy_target",
00:05:00.338 "ublk_create_target",
00:05:00.338 "virtio_blk_create_transport",
00:05:00.338 "virtio_blk_get_transports",
00:05:00.338 "vhost_controller_set_coalescing",
00:05:00.338 "vhost_get_controllers",
00:05:00.338 "vhost_delete_controller",
00:05:00.338 "vhost_create_blk_controller",
00:05:00.338 "vhost_scsi_controller_remove_target",
00:05:00.338 "vhost_scsi_controller_add_target",
00:05:00.338 "vhost_start_scsi_controller",
00:05:00.338 "vhost_create_scsi_controller",
00:05:00.338 "thread_set_cpumask",
00:05:00.338 "scheduler_set_options",
00:05:00.338 "framework_get_governor",
00:05:00.338 "framework_get_scheduler",
00:05:00.338 "framework_set_scheduler",
00:05:00.338 "framework_get_reactors",
00:05:00.338 "thread_get_io_channels",
00:05:00.338 "thread_get_pollers",
00:05:00.339 "thread_get_stats",
00:05:00.339 "framework_monitor_context_switch",
00:05:00.339 "spdk_kill_instance",
00:05:00.339 "log_enable_timestamps",
00:05:00.339 "log_get_flags",
00:05:00.339 "log_clear_flag",
00:05:00.339 "log_set_flag",
00:05:00.339 "log_get_level",
00:05:00.339 "log_set_level",
00:05:00.339 "log_get_print_level",
00:05:00.339 "log_set_print_level",
00:05:00.339 "framework_enable_cpumask_locks",
00:05:00.339 "framework_disable_cpumask_locks",
00:05:00.339 "framework_wait_init",
00:05:00.339 "framework_start_init",
00:05:00.339 "scsi_get_devices",
00:05:00.339 "bdev_get_histogram",
00:05:00.339 "bdev_enable_histogram",
00:05:00.339 "bdev_set_qos_limit",
00:05:00.339 "bdev_set_qd_sampling_period",
00:05:00.339 "bdev_get_bdevs",
00:05:00.339 "bdev_reset_iostat",
00:05:00.339 "bdev_get_iostat",
00:05:00.339 "bdev_examine",
00:05:00.339 "bdev_wait_for_examine",
00:05:00.339 "bdev_set_options",
00:05:00.339 "accel_get_stats",
00:05:00.339 "accel_set_options",
00:05:00.339 "accel_set_driver",
00:05:00.339 "accel_crypto_key_destroy",
00:05:00.339 "accel_crypto_keys_get",
00:05:00.339 "accel_crypto_key_create",
00:05:00.339 "accel_assign_opc",
00:05:00.339 "accel_get_module_info",
00:05:00.339 "accel_get_opc_assignments",
00:05:00.339 "vmd_rescan",
00:05:00.339 "vmd_remove_device",
00:05:00.339 "vmd_enable",
00:05:00.339 "sock_get_default_impl",
00:05:00.339 "sock_set_default_impl",
00:05:00.339 "sock_impl_set_options",
00:05:00.339 "sock_impl_get_options",
00:05:00.339 "iobuf_get_stats",
00:05:00.339 "iobuf_set_options",
00:05:00.339 "keyring_get_keys",
00:05:00.339 "framework_get_pci_devices",
00:05:00.339 "framework_get_config",
00:05:00.339 "framework_get_subsystems",
00:05:00.339 "fsdev_set_opts",
00:05:00.339 "fsdev_get_opts",
00:05:00.339 "trace_get_info",
00:05:00.339 "trace_get_tpoint_group_mask",
00:05:00.339 "trace_disable_tpoint_group",
00:05:00.339 "trace_enable_tpoint_group",
00:05:00.339 "trace_clear_tpoint_mask",
00:05:00.339 "trace_set_tpoint_mask",
00:05:00.339 "notify_get_notifications",
00:05:00.339 "notify_get_types",
00:05:00.339 "spdk_get_version",
00:05:00.339 "rpc_get_methods"
00:05:00.339 ]
00:05:00.339 14:37:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:00.339 14:37:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:00.339 14:37:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:00.339 14:37:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:00.339 14:37:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59163
00:05:00.339 14:37:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59163 ']'
00:05:00.339 14:37:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59163
00:05:00.339 14:37:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:00.599 14:37:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:00.599 14:37:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59163
00:05:00.599 14:37:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:00.599 14:37:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59163
14:37:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59163'
14:37:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59163
14:37:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59163
00:05:03.203 ************************************
00:05:03.203 END TEST spdkcli_tcp
00:05:03.203 ************************************
00:05:03.203
00:05:03.203 real 0m4.705s
00:05:03.203 user 0m8.572s
00:05:03.203 sys 0m0.656s
00:05:03.203 14:37:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.203 14:37:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:03.203 14:37:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:03.203 14:37:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:03.203 14:37:41 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:03.203 14:37:41 -- common/autotest_common.sh@10 -- # set +x
00:05:03.203 ************************************
00:05:03.203 START TEST dpdk_mem_utility
00:05:03.203 ************************************
00:05:03.203 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:03.203 * Looking for test storage...
00:05:03.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:03.463 14:37:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.463 --rc genhtml_branch_coverage=1
00:05:03.463 --rc genhtml_function_coverage=1
00:05:03.463 --rc genhtml_legend=1
00:05:03.463 --rc geninfo_all_blocks=1
00:05:03.463 --rc geninfo_unexecuted_blocks=1
00:05:03.463
00:05:03.463 '
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.463 --rc genhtml_branch_coverage=1
00:05:03.463 --rc genhtml_function_coverage=1
00:05:03.463 --rc genhtml_legend=1
00:05:03.463 --rc geninfo_all_blocks=1
00:05:03.463 --rc geninfo_unexecuted_blocks=1
00:05:03.463
00:05:03.463 '
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.463 --rc genhtml_branch_coverage=1
00:05:03.463 --rc genhtml_function_coverage=1
00:05:03.463 --rc genhtml_legend=1
00:05:03.463 --rc geninfo_all_blocks=1
00:05:03.463 --rc geninfo_unexecuted_blocks=1
00:05:03.463
00:05:03.463 '
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:03.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.463 --rc genhtml_branch_coverage=1
00:05:03.463 --rc genhtml_function_coverage=1
00:05:03.463 --rc genhtml_legend=1
00:05:03.463 --rc geninfo_all_blocks=1
00:05:03.463 --rc geninfo_unexecuted_blocks=1
00:05:03.463
00:05:03.463 '
00:05:03.463 14:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:03.463 14:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59291
00:05:03.463 14:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59291
00:05:03.463 14:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59291 ']'
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:03.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:03.463 14:37:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:03.463 [2024-12-09 14:37:41.532350] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
[2024-12-09 14:37:41.532478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ]
00:05:03.722 [2024-12-09 14:37:41.709877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:03.722 [2024-12-09 14:37:41.833097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.103 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.103 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:05.103 14:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:05.103 14:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:05.103 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:05.103 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:05.103 {
00:05:05.103 "filename": "/tmp/spdk_mem_dump.txt"
00:05:05.103 }
00:05:05.103 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:05.103 14:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:05.103 DPDK memory size 824.000000 MiB in 1 heap(s)
00:05:05.103 1 heaps totaling size 824.000000 MiB
00:05:05.103 size: 824.000000 MiB heap id: 0
00:05:05.103 end heaps----------
00:05:05.103 9 mempools totaling size 603.782043 MiB
00:05:05.103 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:05.103 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:05.103 size: 100.555481 MiB name: bdev_io_59291
00:05:05.103 size: 50.003479 MiB name: msgpool_59291
00:05:05.103 size: 36.509338 MiB name: fsdev_io_59291
00:05:05.103 size: 21.763794 MiB name: PDU_Pool
00:05:05.103 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:05.103 size: 4.133484 MiB name: evtpool_59291
00:05:05.103 size: 0.026123 MiB name: Session_Pool
00:05:05.103 end mempools-------
00:05:05.103 6 memzones totaling size 4.142822 MiB
00:05:05.103 size: 1.000366 MiB name: RG_ring_0_59291
00:05:05.103 size: 1.000366 MiB name: RG_ring_1_59291
00:05:05.103 size: 1.000366 MiB name: RG_ring_4_59291
00:05:05.103 size: 1.000366 MiB name: RG_ring_5_59291
00:05:05.103 size: 0.125366 MiB name: RG_ring_2_59291
00:05:05.103 size: 0.015991 MiB name: RG_ring_3_59291
00:05:05.103 end memzones-------
00:05:05.103 14:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:05.103 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18
00:05:05.103 list of free elements. size: 16.781372 MiB
00:05:05.103 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:05.103 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:05.103 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:05.103 element at address: 0x200019500040 with size: 0.999939 MiB
00:05:05.103 element at address: 0x200019900040 with size: 0.999939 MiB
00:05:05.103 element at address: 0x200019a00000 with size: 0.999084 MiB
00:05:05.103 element at address: 0x200032600000 with size: 0.994324 MiB
00:05:05.103 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:05.103 element at address: 0x200019200000 with size: 0.959656 MiB
00:05:05.103 element at address: 0x200019d00040 with size: 0.936401 MiB
00:05:05.103 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:05.103 element at address: 0x20001b400000 with size: 0.562683 MiB
00:05:05.103 element at address: 0x200000c00000 with size: 0.489197 MiB
00:05:05.103 element at address: 0x200019600000 with size: 0.487976 MiB
00:05:05.103 element at address: 0x200019e00000 with size: 0.485413 MiB
00:05:05.103 element at address: 0x200012c00000 with size: 0.433472 MiB
00:05:05.103 element at address: 0x200028800000 with size: 0.390442 MiB
00:05:05.103 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:05.103 list of standard malloc elements. size: 199.287720 MiB
00:05:05.103 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:05.103 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:05.103 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:05.103 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:05:05.103 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:05:05.103 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:05.103 element at address: 0x200019deff40 with size: 0.062683 MiB
00:05:05.103 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:05.103 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:05.103 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:05:05.103 element at address: 0x200012bff040 with size: 0.000305 MiB
00:05:05.103 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fdf40 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe040 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe140 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe240 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe340 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe440 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe540 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe640 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe740 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe840 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fe940 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fea40 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004feb40 with size: 0.000244 MiB
00:05:05.103 element at address: 0x2000004fec40 with size: 0.000244 MiB
element at
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:05.103 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:05.103 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:05.103 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:05.103 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:05.104 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:05.104 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:05.104 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:05.104 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4911c0 with size: 0.000244 
MiB 00:05:05.104 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b492dc0 
with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:05.104 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:05.105 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:05.105 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:05.105 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886bc80 with size: 0.000244 MiB 
00:05:05.105 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d880 with 
size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:05.105 element at address: 
0x20002886f480 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:05.105 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:05.105 list of memzone associated elements. size: 607.930908 MiB 00:05:05.105 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:05.105 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:05.105 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:05.105 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:05.105 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:05.105 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59291_0 00:05:05.105 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:05.105 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59291_0 00:05:05.105 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:05.105 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59291_0 00:05:05.105 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:05.105 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:05.105 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:05.105 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:05.105 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:05:05.105 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59291_0 00:05:05.105 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:05.105 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59291 00:05:05.105 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:05.105 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59291 00:05:05.105 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:05.105 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:05.105 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:05.105 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:05.105 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:05.105 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:05.105 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:05.105 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:05.105 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:05.105 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59291 00:05:05.105 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:05.105 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59291 00:05:05.105 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:05.105 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59291 00:05:05.105 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:05.105 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59291 00:05:05.105 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:05.106 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59291 00:05:05.106 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:05.106 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59291 00:05:05.106 element at address: 0x20001967dac0 with size: 
0.500549 MiB 00:05:05.106 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:05.106 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:05.106 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:05.106 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:05.106 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:05.106 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:05.106 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59291 00:05:05.106 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:05.106 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59291 00:05:05.106 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:05.106 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:05.106 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:05.106 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:05.106 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:05.106 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59291 00:05:05.106 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:05.106 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:05.106 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:05.106 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59291 00:05:05.106 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:05.106 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59291 00:05:05.106 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:05.106 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59291 00:05:05.106 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:05.106 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:05.106 14:37:42 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:05.106 14:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59291 00:05:05.106 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59291 ']' 00:05:05.106 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59291 00:05:05.106 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:05.106 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.106 14:37:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59291 00:05:05.106 14:37:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.106 killing process with pid 59291 00:05:05.106 14:37:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.106 14:37:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59291' 00:05:05.106 14:37:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59291 00:05:05.106 14:37:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59291 00:05:08.403 00:05:08.403 real 0m4.613s 00:05:08.403 user 0m4.557s 00:05:08.403 sys 0m0.601s 00:05:08.403 14:37:45 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.403 14:37:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.403 ************************************ 00:05:08.403 END TEST dpdk_mem_utility 00:05:08.403 ************************************ 00:05:08.403 14:37:45 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.403 14:37:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.403 14:37:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.403 14:37:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.403 ************************************ 
00:05:08.403 START TEST event 00:05:08.403 ************************************ 00:05:08.403 14:37:45 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.403 * Looking for test storage... 00:05:08.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.403 14:37:45 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.403 14:37:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.403 14:37:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.403 14:37:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.403 14:37:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.403 14:37:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.403 14:37:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.403 14:37:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.403 14:37:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.403 14:37:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.403 14:37:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.403 14:37:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.403 14:37:46 event -- scripts/common.sh@344 -- # case "$op" in 00:05:08.403 14:37:46 event -- scripts/common.sh@345 -- # : 1 00:05:08.403 14:37:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.403 14:37:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.403 14:37:46 event -- scripts/common.sh@365 -- # decimal 1 00:05:08.403 14:37:46 event -- scripts/common.sh@353 -- # local d=1 00:05:08.403 14:37:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.403 14:37:46 event -- scripts/common.sh@355 -- # echo 1 00:05:08.403 14:37:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.403 14:37:46 event -- scripts/common.sh@366 -- # decimal 2 00:05:08.403 14:37:46 event -- scripts/common.sh@353 -- # local d=2 00:05:08.403 14:37:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.403 14:37:46 event -- scripts/common.sh@355 -- # echo 2 00:05:08.403 14:37:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.403 14:37:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.403 14:37:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.403 14:37:46 event -- scripts/common.sh@368 -- # return 0 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.403 --rc genhtml_branch_coverage=1 00:05:08.403 --rc genhtml_function_coverage=1 00:05:08.403 --rc genhtml_legend=1 00:05:08.403 --rc geninfo_all_blocks=1 00:05:08.403 --rc geninfo_unexecuted_blocks=1 00:05:08.403 00:05:08.403 ' 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.403 --rc genhtml_branch_coverage=1 00:05:08.403 --rc genhtml_function_coverage=1 00:05:08.403 --rc genhtml_legend=1 00:05:08.403 --rc geninfo_all_blocks=1 00:05:08.403 --rc geninfo_unexecuted_blocks=1 00:05:08.403 00:05:08.403 ' 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.403 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:08.403 --rc genhtml_branch_coverage=1 00:05:08.403 --rc genhtml_function_coverage=1 00:05:08.403 --rc genhtml_legend=1 00:05:08.403 --rc geninfo_all_blocks=1 00:05:08.403 --rc geninfo_unexecuted_blocks=1 00:05:08.403 00:05:08.403 ' 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.403 --rc genhtml_branch_coverage=1 00:05:08.403 --rc genhtml_function_coverage=1 00:05:08.403 --rc genhtml_legend=1 00:05:08.403 --rc geninfo_all_blocks=1 00:05:08.403 --rc geninfo_unexecuted_blocks=1 00:05:08.403 00:05:08.403 ' 00:05:08.403 14:37:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:08.403 14:37:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.403 14:37:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:08.403 14:37:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.403 14:37:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.403 ************************************ 00:05:08.403 START TEST event_perf 00:05:08.403 ************************************ 00:05:08.404 14:37:46 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.404 Running I/O for 1 seconds...[2024-12-09 14:37:46.171183] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:05:08.404 [2024-12-09 14:37:46.171337] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59404 ] 00:05:08.404 [2024-12-09 14:37:46.353695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.404 [2024-12-09 14:37:46.493157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.404 [2024-12-09 14:37:46.493238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.404 Running I/O for 1 seconds...[2024-12-09 14:37:46.493408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.404 [2024-12-09 14:37:46.493443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.779 00:05:09.779 lcore 0: 180508 00:05:09.779 lcore 1: 180507 00:05:09.779 lcore 2: 180506 00:05:09.779 lcore 3: 180507 00:05:09.779 done. 
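The `cmp_versions`/`lt` xtrace above (splitting on `IFS=.-:`, `read -ra ver1`, component-wise `(( ver1[v] < ver2[v] ))`) is SPDK's lcov version gate. A minimal sketch reconstructed from that trace — an approximation, not the exact `scripts/common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: split both versions on
# '.', '-' and ':' and compare component-wise. Reconstructed from the
# xtrace; details are approximated.
cmp_versions() {
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}

    local lt=0 gt=0 eq=0 v
    case "$op" in
        "<")  lt=1 ;;
        ">")  gt=1 ;;
        "==") eq=1 ;;
        *) echo "unsupported operator: $op" >&2; return 1 ;;
    esac

    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # A missing component compares as 0, so "lt 1.15 2" behaves like 1.15 < 2.0
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && return $((gt ? 0 : 1))
        ((d1 < d2)) && return $((lt ? 0 : 1))
    done
    return $((eq ? 0 : 1))
}

lt() { cmp_versions "$1" "<" "$2"; }
```

In the log, `lt 1.15 2` succeeds (lcov is older than 2), which selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling exported into LCOV_OPTS.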
00:05:09.779 00:05:09.779 real 0m1.663s 00:05:09.779 user 0m4.404s 00:05:09.779 sys 0m0.128s 00:05:09.779 14:37:47 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.779 14:37:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.779 ************************************ 00:05:09.779 END TEST event_perf 00:05:09.779 ************************************ 00:05:09.779 14:37:47 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.779 14:37:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:09.779 14:37:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.779 14:37:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.779 ************************************ 00:05:09.779 START TEST event_reactor 00:05:09.779 ************************************ 00:05:09.779 14:37:47 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.038 [2024-12-09 14:37:47.901862] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
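The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` summaries that bracket every test above come from the `run_test` wrapper in `common/autotest_common.sh`. A hedged sketch of that pattern — banner text matches the log, the rest is approximated:

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: print a START banner, time the test
# command, print an END banner, and propagate the command's exit status.
run_test() {
    local test_name=$1
    shift

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    # 'time' emits the real/user/sys lines (on stderr), as seen in the log.
    time "$@"
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
```

Called as `run_test event_perf /path/to/event_perf -m 0xF -t 1`, this yields exactly the banner/timing framing visible around each test in this log.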
00:05:10.038 [2024-12-09 14:37:47.902058] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59444 ] 00:05:10.038 [2024-12-09 14:37:48.086604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.297 [2024-12-09 14:37:48.206326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.673 test_start 00:05:11.673 oneshot 00:05:11.673 tick 100 00:05:11.673 tick 100 00:05:11.673 tick 250 00:05:11.673 tick 100 00:05:11.673 tick 100 00:05:11.673 tick 100 00:05:11.673 tick 250 00:05:11.673 tick 500 00:05:11.673 tick 100 00:05:11.673 tick 100 00:05:11.673 tick 250 00:05:11.673 tick 100 00:05:11.673 tick 100 00:05:11.673 test_end 00:05:11.673 00:05:11.673 real 0m1.626s 00:05:11.673 user 0m1.414s 00:05:11.673 sys 0m0.102s 00:05:11.673 14:37:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.673 14:37:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:11.673 ************************************ 00:05:11.673 END TEST event_reactor 00:05:11.674 ************************************ 00:05:11.674 14:37:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.674 14:37:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:11.674 14:37:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.674 14:37:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.674 ************************************ 00:05:11.674 START TEST event_reactor_perf 00:05:11.674 ************************************ 00:05:11.674 14:37:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.674 [2024-12-09 
14:37:49.587330] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:11.674 [2024-12-09 14:37:49.587467] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ] 00:05:11.674 [2024-12-09 14:37:49.763145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.933 [2024-12-09 14:37:49.899010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.353 test_start 00:05:13.353 test_end 00:05:13.353 Performance: 312999 events per second 00:05:13.353 00:05:13.353 real 0m1.618s 00:05:13.353 user 0m1.414s 00:05:13.353 sys 0m0.094s 00:05:13.353 14:37:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.353 14:37:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.353 ************************************ 00:05:13.353 END TEST event_reactor_perf 00:05:13.353 ************************************ 00:05:13.353 14:37:51 event -- event/event.sh@49 -- # uname -s 00:05:13.353 14:37:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.353 14:37:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.353 14:37:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.353 14:37:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.353 14:37:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.353 ************************************ 00:05:13.353 START TEST event_scheduler 00:05:13.353 ************************************ 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.353 * Looking for test storage... 
00:05:13.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.353 14:37:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.353 --rc genhtml_branch_coverage=1 00:05:13.353 --rc genhtml_function_coverage=1 00:05:13.353 --rc genhtml_legend=1 00:05:13.353 --rc geninfo_all_blocks=1 00:05:13.353 --rc geninfo_unexecuted_blocks=1 00:05:13.353 00:05:13.353 ' 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.353 --rc genhtml_branch_coverage=1 00:05:13.353 --rc genhtml_function_coverage=1 00:05:13.353 --rc 
genhtml_legend=1 00:05:13.353 --rc geninfo_all_blocks=1 00:05:13.353 --rc geninfo_unexecuted_blocks=1 00:05:13.353 00:05:13.353 ' 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.353 --rc genhtml_branch_coverage=1 00:05:13.353 --rc genhtml_function_coverage=1 00:05:13.353 --rc genhtml_legend=1 00:05:13.353 --rc geninfo_all_blocks=1 00:05:13.353 --rc geninfo_unexecuted_blocks=1 00:05:13.353 00:05:13.353 ' 00:05:13.353 14:37:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.354 --rc genhtml_branch_coverage=1 00:05:13.354 --rc genhtml_function_coverage=1 00:05:13.354 --rc genhtml_legend=1 00:05:13.354 --rc geninfo_all_blocks=1 00:05:13.354 --rc geninfo_unexecuted_blocks=1 00:05:13.354 00:05:13.354 ' 00:05:13.354 14:37:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:13.354 14:37:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59562 00:05:13.354 14:37:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:13.354 14:37:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.354 14:37:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59562 00:05:13.354 14:37:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59562 ']' 00:05:13.354 14:37:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.354 14:37:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.354 14:37:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.354 14:37:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.354 14:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.614 [2024-12-09 14:37:51.545478] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:13.614 [2024-12-09 14:37:51.545628] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59562 ] 00:05:13.614 [2024-12-09 14:37:51.724156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.873 [2024-12-09 14:37:51.850545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.873 [2024-12-09 14:37:51.850701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.873 [2024-12-09 14:37:51.850837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.873 [2024-12-09 14:37:51.850896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.443 14:37:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.443 14:37:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:14.443 14:37:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:14.443 14:37:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.443 14:37:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.443 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.443 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.443 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.443 POWER: Cannot set governor of lcore 0 to performance 00:05:14.443 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.443 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.443 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.443 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.443 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:14.443 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:14.443 POWER: Unable to set Power Management Environment for lcore 0 00:05:14.443 [2024-12-09 14:37:52.467381] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:14.443 [2024-12-09 14:37:52.467404] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:14.443 [2024-12-09 14:37:52.467414] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:14.443 [2024-12-09 14:37:52.467433] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:14.443 [2024-12-09 14:37:52.467441] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:14.443 [2024-12-09 14:37:52.467450] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:14.443 14:37:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.443 14:37:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:14.443 14:37:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.443 14:37:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.703 [2024-12-09 14:37:52.789338] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
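The `waitforlisten 59562` trace above shows the scheduler test blocking until the app's RPC socket comes up (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`). A minimal sketch of that polling pattern, assuming a liveness check plus a socket-existence check; the retry count is exposed as a third argument here for illustration, and the exact upstream checks differ:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the given pid is alive
# AND its UNIX-domain RPC socket exists, giving up after max_retries tries.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100} i

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # The process must still be alive...
        kill -0 "$pid" 2>/dev/null || return 1
        # ...and must have created its RPC socket.
        [[ -S $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1 # timed out
}
```

Failing fast when the pid dies (rather than only on timeout) is what lets the trap `'killprocess $scheduler_pid; exit 1'` fire promptly if the app crashes during startup.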
00:05:14.703 14:37:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.703 14:37:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:14.703 14:37:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.703 14:37:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.703 14:37:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.703 ************************************ 00:05:14.703 START TEST scheduler_create_thread 00:05:14.703 ************************************ 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.703 2 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.703 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 3 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 4 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 5 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 6 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.964 7 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 8 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 9 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 10 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.964 14:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.532 14:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.532 14:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:15.532 14:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.532 14:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.911 14:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.911 14:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:16.911 14:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:16.911 14:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.911 14:37:54 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.847 14:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.847 00:05:17.847 real 0m3.099s 00:05:17.847 user 0m0.016s 00:05:17.847 sys 0m0.010s 00:05:17.847 14:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.847 14:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.847 ************************************ 00:05:17.847 END TEST scheduler_create_thread 00:05:17.847 ************************************ 00:05:17.847 14:37:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:17.847 14:37:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59562 00:05:17.847 14:37:55 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59562 ']' 00:05:17.847 14:37:55 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59562 00:05:17.847 14:37:55 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:18.106 14:37:55 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.106 14:37:55 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59562 00:05:18.106 14:37:56 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:18.106 14:37:56 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:18.106 killing process with pid 59562 00:05:18.106 14:37:56 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59562' 00:05:18.106 14:37:56 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59562 00:05:18.106 14:37:56 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59562 00:05:18.365 [2024-12-09 14:37:56.278483] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:19.752 00:05:19.752 real 0m6.430s 00:05:19.752 user 0m13.352s 00:05:19.752 sys 0m0.526s 00:05:19.752 14:37:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.752 14:37:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.752 ************************************ 00:05:19.752 END TEST event_scheduler 00:05:19.752 ************************************ 00:05:19.752 14:37:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:19.752 14:37:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:19.753 14:37:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.753 14:37:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.753 14:37:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.753 ************************************ 00:05:19.753 START TEST app_repeat 00:05:19.753 ************************************ 00:05:19.753 14:37:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59679 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:19.753 
14:37:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59679' 00:05:19.753 Process app_repeat pid: 59679 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.753 spdk_app_start Round 0 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:19.753 14:37:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59679 /var/tmp/spdk-nbd.sock 00:05:19.753 14:37:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59679 ']' 00:05:19.753 14:37:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.753 14:37:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.753 14:37:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.753 14:37:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.753 14:37:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.753 [2024-12-09 14:37:57.790019] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:05:19.753 [2024-12-09 14:37:57.790155] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59679 ] 00:05:20.011 [2024-12-09 14:37:57.974199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.011 [2024-12-09 14:37:58.114422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.011 [2024-12-09 14:37:58.114433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.946 14:37:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.946 14:37:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.946 14:37:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.205 Malloc0 00:05:21.205 14:37:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.464 Malloc1 00:05:21.464 14:37:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.464 14:37:59 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.464 14:37:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.723 /dev/nbd0 00:05:21.723 14:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.723 14:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.723 1+0 records in 00:05:21.723 1+0 
records out 00:05:21.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431871 s, 9.5 MB/s 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.723 14:37:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.723 14:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.723 14:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.723 14:37:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.982 /dev/nbd1 00:05:22.240 14:38:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.240 14:38:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.240 1+0 records in 00:05:22.240 1+0 records out 00:05:22.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480227 s, 8.5 MB/s 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.240 14:38:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.240 14:38:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.240 14:38:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.240 14:38:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.240 14:38:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.240 14:38:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.498 { 00:05:22.498 "nbd_device": "/dev/nbd0", 00:05:22.498 "bdev_name": "Malloc0" 00:05:22.498 }, 00:05:22.498 { 00:05:22.498 "nbd_device": "/dev/nbd1", 00:05:22.498 "bdev_name": "Malloc1" 00:05:22.498 } 00:05:22.498 ]' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.498 { 00:05:22.498 "nbd_device": "/dev/nbd0", 00:05:22.498 "bdev_name": "Malloc0" 00:05:22.498 }, 00:05:22.498 { 00:05:22.498 "nbd_device": "/dev/nbd1", 00:05:22.498 "bdev_name": "Malloc1" 00:05:22.498 } 00:05:22.498 ]' 
00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.498 /dev/nbd1' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.498 /dev/nbd1' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.498 256+0 records in 00:05:22.498 256+0 records out 00:05:22.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608151 s, 172 MB/s 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.498 256+0 records in 00:05:22.498 256+0 records out 00:05:22.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267841 s, 39.1 MB/s 00:05:22.498 14:38:00 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.498 256+0 records in 00:05:22.498 256+0 records out 00:05:22.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279708 s, 37.5 MB/s 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.498 14:38:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.756 14:38:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.014 14:38:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.273 14:38:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.273 14:38:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.273 14:38:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.531 14:38:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.531 14:38:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.097 14:38:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.474 [2024-12-09 14:38:03.285266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.474 [2024-12-09 14:38:03.417471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.474 [2024-12-09 14:38:03.417472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.733 
[2024-12-09 14:38:03.641045] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.733 [2024-12-09 14:38:03.641157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.109 spdk_app_start Round 1 00:05:27.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.109 14:38:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.109 14:38:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:27.109 14:38:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59679 /var/tmp/spdk-nbd.sock 00:05:27.109 14:38:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59679 ']' 00:05:27.109 14:38:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.109 14:38:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.109 14:38:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:27.109 14:38:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.109 14:38:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.109 14:38:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.109 14:38:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.109 14:38:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.368 Malloc0 00:05:27.368 14:38:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.627 Malloc1 00:05:27.627 14:38:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.627 14:38:05 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.627 14:38:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.885 /dev/nbd0 00:05:27.885 14:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.885 14:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.885 1+0 records in 00:05:27.885 1+0 records out 00:05:27.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433415 s, 9.5 MB/s 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.885 14:38:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.885 14:38:05 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.885 14:38:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.885 14:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.885 14:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.885 14:38:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.144 /dev/nbd1 00:05:28.144 14:38:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.144 14:38:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.144 1+0 records in 00:05:28.144 1+0 records out 00:05:28.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380372 s, 10.8 MB/s 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:28.144 14:38:06 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:28.144 14:38:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:28.144 14:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.144 14:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.144 14:38:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.144 14:38:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.144 14:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.404 14:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.404 { 00:05:28.404 "nbd_device": "/dev/nbd0", 00:05:28.404 "bdev_name": "Malloc0" 00:05:28.404 }, 00:05:28.404 { 00:05:28.404 "nbd_device": "/dev/nbd1", 00:05:28.404 "bdev_name": "Malloc1" 00:05:28.404 } 00:05:28.404 ]' 00:05:28.404 14:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.404 { 00:05:28.404 "nbd_device": "/dev/nbd0", 00:05:28.404 "bdev_name": "Malloc0" 00:05:28.404 }, 00:05:28.404 { 00:05:28.404 "nbd_device": "/dev/nbd1", 00:05:28.404 "bdev_name": "Malloc1" 00:05:28.404 } 00:05:28.404 ]' 00:05:28.404 14:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.664 /dev/nbd1' 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.664 /dev/nbd1' 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.664 
14:38:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.664 256+0 records in 00:05:28.664 256+0 records out 00:05:28.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146325 s, 71.7 MB/s 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.664 256+0 records in 00:05:28.664 256+0 records out 00:05:28.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259332 s, 40.4 MB/s 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.664 256+0 records in 00:05:28.664 256+0 records out 00:05:28.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263214 s, 39.8 MB/s 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.664 14:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.665 14:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.924 14:38:06 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.924 14:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.184 14:38:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.444 14:38:07 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.444 14:38:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.444 14:38:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.702 14:38:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.087 [2024-12-09 14:38:08.971491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.087 [2024-12-09 14:38:09.079468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.087 [2024-12-09 14:38:09.079493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.354 [2024-12-09 14:38:09.277629] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.354 [2024-12-09 14:38:09.277796] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:32.734 14:38:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.734 14:38:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:32.734 spdk_app_start Round 2 00:05:32.734 14:38:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59679 /var/tmp/spdk-nbd.sock 00:05:32.734 14:38:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59679 ']' 00:05:32.734 14:38:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.734 14:38:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.734 14:38:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.734 14:38:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.734 14:38:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.993 14:38:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.993 14:38:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.993 14:38:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.252 Malloc0 00:05:33.252 14:38:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.511 Malloc1 00:05:33.772 14:38:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.772 
14:38:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.772 14:38:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.772 /dev/nbd0 00:05:34.032 14:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.032 14:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:34.032 14:38:11 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.032 1+0 records in 00:05:34.032 1+0 records out 00:05:34.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274541 s, 14.9 MB/s 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.032 14:38:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.032 14:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.032 14:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.032 14:38:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.293 /dev/nbd1 00:05:34.293 14:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.293 14:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.293 14:38:12 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.293 1+0 records in 00:05:34.293 1+0 records out 00:05:34.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290729 s, 14.1 MB/s 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.293 14:38:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.293 14:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.293 14:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.293 14:38:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.293 14:38:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.293 14:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.554 { 00:05:34.554 "nbd_device": "/dev/nbd0", 00:05:34.554 "bdev_name": "Malloc0" 00:05:34.554 }, 00:05:34.554 { 00:05:34.554 "nbd_device": "/dev/nbd1", 00:05:34.554 "bdev_name": 
"Malloc1" 00:05:34.554 } 00:05:34.554 ]' 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.554 { 00:05:34.554 "nbd_device": "/dev/nbd0", 00:05:34.554 "bdev_name": "Malloc0" 00:05:34.554 }, 00:05:34.554 { 00:05:34.554 "nbd_device": "/dev/nbd1", 00:05:34.554 "bdev_name": "Malloc1" 00:05:34.554 } 00:05:34.554 ]' 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.554 /dev/nbd1' 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.554 /dev/nbd1' 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.554 256+0 records in 00:05:34.554 256+0 records out 00:05:34.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141606 s, 74.0 MB/s 
00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.554 256+0 records in 00:05:34.554 256+0 records out 00:05:34.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253216 s, 41.4 MB/s 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.554 256+0 records in 00:05:34.554 256+0 records out 00:05:34.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026796 s, 39.1 MB/s 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.554 14:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.555 14:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.815 14:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.075 14:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.335 14:38:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.335 14:38:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.906 14:38:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.287 [2024-12-09 14:38:15.076240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.287 [2024-12-09 14:38:15.196016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.287 [2024-12-09 14:38:15.196018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.287 [2024-12-09 14:38:15.403596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.287 [2024-12-09 14:38:15.403822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.754 14:38:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59679 /var/tmp/spdk-nbd.sock 00:05:38.754 14:38:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59679 ']' 00:05:38.754 14:38:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.754 14:38:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.754 14:38:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
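The nbd_get_count steps in the trace pipe the disk list through `jq` and count the `/dev/nbd` entries. A standalone sketch of that counting logic, with the RPC reply stubbed as a literal (the real helper fetches it via `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks`):

```shell
# Stubbed sketch of the nbd_get_count logic traced above. The JSON here
# is a hypothetical stand-in for the nbd_get_disks RPC reply so the
# jq/grep counting can run on its own.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# Extract one device path per line, then count lines mentioning /dev/nbd.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"   # 2 while both disks are attached
```

After nbd_stop_disk has run for both devices, the same pipeline sees an empty `[]` reply and yields a count of 0, which is exactly the `'[' 0 -ne 0 ']'` check visible in the log.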
00:05:38.754 14:38:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.754 14:38:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.013 14:38:17 event.app_repeat -- event/event.sh@39 -- # killprocess 59679 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59679 ']' 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59679 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59679 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59679' 00:05:39.013 killing process with pid 59679 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59679 00:05:39.013 14:38:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59679 00:05:40.414 spdk_app_start is called in Round 0. 00:05:40.414 Shutdown signal received, stop current app iteration 00:05:40.414 Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 reinitialization... 00:05:40.414 spdk_app_start is called in Round 1. 00:05:40.414 Shutdown signal received, stop current app iteration 00:05:40.414 Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 reinitialization... 00:05:40.414 spdk_app_start is called in Round 2. 
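The shutdown traced above goes through a killprocess helper: check the pid, look up its command name with `ps`, send SIGTERM, and wait. A hedged sketch of that pattern (details such as the Linux/`uname` branch and the `sudo` process-name check in autotest_common.sh are omitted):

```shell
# Simplified sketch of the killprocess pattern visible in the trace:
# verify the pid is alive, record its command name, SIGTERM it, and
# reap it with wait so no zombie is left behind.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # not running
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}
```

`kill -0` sends no signal at all; it only tests whether the pid exists and is signalable, which is why the trace uses it both before killing and, in the later NOT-waitforlisten case, to detect that the process is already gone.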
00:05:40.414 Shutdown signal received, stop current app iteration 00:05:40.414 Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 reinitialization... 00:05:40.414 spdk_app_start is called in Round 3. 00:05:40.414 Shutdown signal received, stop current app iteration 00:05:40.414 14:38:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.414 14:38:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:40.414 00:05:40.414 real 0m20.527s 00:05:40.414 user 0m44.433s 00:05:40.414 sys 0m2.990s 00:05:40.414 14:38:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.414 14:38:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.414 ************************************ 00:05:40.414 END TEST app_repeat 00:05:40.414 ************************************ 00:05:40.414 14:38:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:40.414 14:38:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:40.414 14:38:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.414 14:38:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.414 14:38:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.414 ************************************ 00:05:40.414 START TEST cpu_locks 00:05:40.414 ************************************ 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:40.414 * Looking for test storage... 
00:05:40.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.414 14:38:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.414 --rc genhtml_branch_coverage=1 00:05:40.414 --rc genhtml_function_coverage=1 00:05:40.414 --rc genhtml_legend=1 00:05:40.414 --rc geninfo_all_blocks=1 00:05:40.414 --rc geninfo_unexecuted_blocks=1 00:05:40.414 00:05:40.414 ' 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.414 --rc genhtml_branch_coverage=1 00:05:40.414 --rc genhtml_function_coverage=1 00:05:40.414 --rc genhtml_legend=1 00:05:40.414 --rc geninfo_all_blocks=1 00:05:40.414 --rc geninfo_unexecuted_blocks=1 
00:05:40.414 00:05:40.414 ' 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.414 --rc genhtml_branch_coverage=1 00:05:40.414 --rc genhtml_function_coverage=1 00:05:40.414 --rc genhtml_legend=1 00:05:40.414 --rc geninfo_all_blocks=1 00:05:40.414 --rc geninfo_unexecuted_blocks=1 00:05:40.414 00:05:40.414 ' 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.414 --rc genhtml_branch_coverage=1 00:05:40.414 --rc genhtml_function_coverage=1 00:05:40.414 --rc genhtml_legend=1 00:05:40.414 --rc geninfo_all_blocks=1 00:05:40.414 --rc geninfo_unexecuted_blocks=1 00:05:40.414 00:05:40.414 ' 00:05:40.414 14:38:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:40.414 14:38:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:40.414 14:38:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:40.414 14:38:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.414 14:38:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.414 ************************************ 00:05:40.414 START TEST default_locks 00:05:40.414 ************************************ 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60134 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.414 
14:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60134 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60134 ']' 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.414 14:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.674 [2024-12-09 14:38:18.612178] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:05:40.674 [2024-12-09 14:38:18.612401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60134 ] 00:05:40.674 [2024-12-09 14:38:18.772627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.934 [2024-12-09 14:38:18.909892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.891 14:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.891 14:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:41.891 14:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60134 00:05:41.891 14:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60134 00:05:41.891 14:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.150 14:38:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60134 00:05:42.151 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60134 ']' 00:05:42.151 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60134 00:05:42.151 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:42.151 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.151 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60134 00:05:42.410 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.410 killing process with pid 60134 00:05:42.411 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.411 14:38:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60134' 00:05:42.411 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60134 00:05:42.411 14:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60134 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60134 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60134 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60134 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60134 ']' 00:05:44.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:44.952 ERROR: process (pid: 60134) is no longer running 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60134) - No such process 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.952 00:05:44.952 real 0m4.303s 00:05:44.952 user 0m4.287s 00:05:44.952 sys 0m0.632s 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.952 14:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.952 ************************************ 00:05:44.952 END TEST default_locks 00:05:44.952 ************************************ 00:05:44.952 14:38:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.952 14:38:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:44.952 14:38:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.952 14:38:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.952 ************************************ 00:05:44.952 START TEST default_locks_via_rpc 00:05:44.952 ************************************ 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60209 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60209 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60209 ']' 00:05:44.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.952 14:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.952 [2024-12-09 14:38:22.984519] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:05:44.952 [2024-12-09 14:38:22.984777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60209 ] 00:05:45.212 [2024-12-09 14:38:23.164449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.212 [2024-12-09 14:38:23.291796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.152 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.420 14:38:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60209 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60209 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60209 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60209 ']' 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60209 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.420 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60209 00:05:46.696 killing process with pid 60209 00:05:46.696 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.696 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.696 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60209' 00:05:46.696 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60209 00:05:46.696 14:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60209 00:05:49.237 ************************************ 00:05:49.237 END TEST default_locks_via_rpc 00:05:49.237 ************************************ 00:05:49.237 00:05:49.237 real 0m4.107s 00:05:49.237 user 0m4.071s 00:05:49.237 sys 0m0.631s 00:05:49.237 
14:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.237 14:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.237 14:38:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.237 14:38:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.237 14:38:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.237 14:38:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.237 ************************************ 00:05:49.237 START TEST non_locking_app_on_locked_coremask 00:05:49.237 ************************************ 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60283 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60283 /var/tmp/spdk.sock 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60283 ']' 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:49.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.237 14:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.237 [2024-12-09 14:38:27.155862] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:49.237 [2024-12-09 14:38:27.156097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60283 ] 00:05:49.237 [2024-12-09 14:38:27.336661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.497 [2024-12-09 14:38:27.457140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.438 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.438 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.438 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60310 00:05:50.438 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60310 /var/tmp/spdk2.sock 00:05:50.438 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60310 ']' 00:05:50.439 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.439 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.439 14:38:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.439 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.439 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.439 14:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.439 [2024-12-09 14:38:28.441382] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:50.439 [2024-12-09 14:38:28.441615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:05:50.700 [2024-12-09 14:38:28.610407] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.700 [2024-12-09 14:38:28.610484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.961 [2024-12-09 14:38:28.854157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60283 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60283 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60283 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60283 ']' 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60283 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60283 00:05:53.499 killing process with pid 60283 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.499 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60283' 00:05:53.500 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60283 00:05:53.500 14:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60283 00:05:58.778 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60310 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60310 ']' 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60310 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60310 00:05:58.779 killing process with pid 60310 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60310' 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60310 00:05:58.779 14:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60310 00:06:01.318 00:06:01.318 real 0m11.874s 00:06:01.318 user 0m12.124s 00:06:01.318 sys 0m1.270s 00:06:01.318 14:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:01.318 14:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.318 ************************************ 00:06:01.318 END TEST non_locking_app_on_locked_coremask 00:06:01.318 ************************************ 00:06:01.318 14:38:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.318 14:38:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.318 14:38:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.318 14:38:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.318 ************************************ 00:06:01.318 START TEST locking_app_on_unlocked_coremask 00:06:01.318 ************************************ 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60464 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60464 /var/tmp/spdk.sock 00:06:01.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60464 ']' 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.318 14:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.318 [2024-12-09 14:38:39.094730] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:01.318 [2024-12-09 14:38:39.094873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:06:01.318 [2024-12-09 14:38:39.267466] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.318 [2024-12-09 14:38:39.267639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.318 [2024-12-09 14:38:39.392102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60480 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60480 /var/tmp/spdk2.sock 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60480 ']' 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.258 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.259 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.259 14:38:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.519 [2024-12-09 14:38:40.438946] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:02.519 [2024-12-09 14:38:40.439206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60480 ] 00:06:02.519 [2024-12-09 14:38:40.614227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.778 [2024-12-09 14:38:40.861265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60480 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60480 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60464 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60464 ']' 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60464 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.320 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60464 00:06:05.580 killing process with pid 60464 00:06:05.580 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.580 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.580 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60464' 00:06:05.580 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60464 00:06:05.580 14:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60464 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60480 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60480 ']' 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60480 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60480 00:06:10.874 killing process with pid 60480 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60480' 00:06:10.874 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60480 00:06:10.875 14:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 60480 00:06:12.789 00:06:12.789 real 0m11.867s 00:06:12.789 user 0m12.179s 00:06:12.789 sys 0m1.241s 00:06:12.789 ************************************ 00:06:12.789 END TEST locking_app_on_unlocked_coremask 00:06:12.789 ************************************ 00:06:12.789 14:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.789 14:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.048 14:38:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.048 14:38:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.048 14:38:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.048 14:38:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.048 ************************************ 00:06:13.048 START TEST locking_app_on_locked_coremask 00:06:13.048 ************************************ 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60630 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60630 /var/tmp/spdk.sock 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60630 ']' 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.048 14:38:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.048 [2024-12-09 14:38:51.018805] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:13.048 [2024-12-09 14:38:51.019006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60630 ] 00:06:13.307 [2024-12-09 14:38:51.195427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.307 [2024-12-09 14:38:51.317243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60650 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60650 /var/tmp/spdk2.sock 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60650 /var/tmp/spdk2.sock 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:14.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60650 /var/tmp/spdk2.sock 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60650 ']' 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.245 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.245 [2024-12-09 14:38:52.334199] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:14.245 [2024-12-09 14:38:52.334428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60650 ] 00:06:14.504 [2024-12-09 14:38:52.504933] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60630 has claimed it. 00:06:14.504 [2024-12-09 14:38:52.504993] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.073 ERROR: process (pid: 60650) is no longer running 00:06:15.073 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60650) - No such process 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60630 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60630 00:06:15.073 14:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60630 00:06:15.333 14:38:53 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60630 ']' 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60630 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60630 00:06:15.333 killing process with pid 60630 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60630' 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60630 00:06:15.333 14:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60630 00:06:17.872 ************************************ 00:06:17.872 END TEST locking_app_on_locked_coremask 00:06:17.872 ************************************ 00:06:17.872 00:06:17.872 real 0m4.975s 00:06:17.872 user 0m5.174s 00:06:17.872 sys 0m0.814s 00:06:17.872 14:38:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.872 14:38:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.872 14:38:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:17.872 14:38:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:17.872 14:38:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.872 14:38:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.872 ************************************ 00:06:17.872 START TEST locking_overlapped_coremask 00:06:17.872 ************************************ 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60719 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60719 /var/tmp/spdk.sock 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60719 ']' 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.872 14:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.135 [2024-12-09 14:38:56.066299] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:18.135 [2024-12-09 14:38:56.066418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60719 ] 00:06:18.135 [2024-12-09 14:38:56.243687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.395 [2024-12-09 14:38:56.362279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.395 [2024-12-09 14:38:56.362418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.395 [2024-12-09 14:38:56.362458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60743 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60743 /var/tmp/spdk2.sock 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60743 /var/tmp/spdk2.sock 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60743 /var/tmp/spdk2.sock 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60743 ']' 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.346 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.347 [2024-12-09 14:38:57.370203] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:19.347 [2024-12-09 14:38:57.370434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:06:19.606 [2024-12-09 14:38:57.556249] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60719 has claimed it. 00:06:19.606 [2024-12-09 14:38:57.556338] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:20.177 ERROR: process (pid: 60743) is no longer running 00:06:20.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60743) - No such process 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60719 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60719 ']' 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60719 00:06:20.177 14:38:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.177 14:38:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.177 14:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60719 00:06:20.177 14:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.177 14:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.177 14:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60719' 00:06:20.177 killing process with pid 60719 00:06:20.178 14:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60719 00:06:20.178 14:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60719 00:06:22.717 00:06:22.717 real 0m4.534s 00:06:22.717 user 0m12.338s 00:06:22.717 sys 0m0.594s 00:06:22.717 ************************************ 00:06:22.717 END TEST locking_overlapped_coremask 00:06:22.717 ************************************ 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.717 14:39:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.717 14:39:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.717 14:39:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.717 14:39:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.717 ************************************ 00:06:22.717 START TEST 
locking_overlapped_coremask_via_rpc 00:06:22.717 ************************************ 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60807 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60807 /var/tmp/spdk.sock 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60807 ']' 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.717 14:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.717 [2024-12-09 14:39:00.652982] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:22.717 [2024-12-09 14:39:00.653113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60807 ] 00:06:22.717 [2024-12-09 14:39:00.826667] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.717 [2024-12-09 14:39:00.826719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.976 [2024-12-09 14:39:00.955542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.976 [2024-12-09 14:39:00.955677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.976 [2024-12-09 14:39:00.955714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60825 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60825 /var/tmp/spdk2.sock 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60825 ']' 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.913 14:39:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.913 14:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.173 [2024-12-09 14:39:02.060684] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:24.173 [2024-12-09 14:39:02.060979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60825 ] 00:06:24.173 [2024-12-09 14:39:02.272308] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.173 [2024-12-09 14:39:02.272380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.433 [2024-12-09 14:39:02.525401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.433 [2024-12-09 14:39:02.525552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.433 [2024-12-09 14:39:02.525630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.981 14:39:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.981 [2024-12-09 14:39:04.707775] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60807 has claimed it. 00:06:26.981 request: 00:06:26.981 { 00:06:26.981 "method": "framework_enable_cpumask_locks", 00:06:26.981 "req_id": 1 00:06:26.981 } 00:06:26.981 Got JSON-RPC error response 00:06:26.981 response: 00:06:26.981 { 00:06:26.981 "code": -32603, 00:06:26.981 "message": "Failed to claim CPU core: 2" 00:06:26.981 } 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60807 /var/tmp/spdk.sock 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 60807 ']' 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60825 /var/tmp/spdk2.sock 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60825 ']' 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.981 14:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.241 00:06:27.241 real 0m4.619s 00:06:27.241 user 0m1.431s 00:06:27.241 sys 0m0.202s 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.241 14:39:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.241 ************************************ 00:06:27.241 END TEST locking_overlapped_coremask_via_rpc 00:06:27.241 ************************************ 00:06:27.241 14:39:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.241 14:39:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60807 ]] 00:06:27.241 14:39:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 60807 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60807 ']' 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60807 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60807 00:06:27.241 killing process with pid 60807 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60807' 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60807 00:06:27.241 14:39:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60807 00:06:29.781 14:39:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60825 ]] 00:06:29.781 14:39:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60825 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60825 ']' 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60825 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60825 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:29.781 killing process with pid 60825 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 60825' 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60825 00:06:29.781 14:39:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60825 00:06:32.313 14:39:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.313 14:39:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.313 Process with pid 60807 is not found 00:06:32.313 Process with pid 60825 is not found 00:06:32.313 14:39:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60807 ]] 00:06:32.313 14:39:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60807 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60807 ']' 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60807 00:06:32.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60807) - No such process 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60807 is not found' 00:06:32.313 14:39:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60825 ]] 00:06:32.313 14:39:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60825 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60825 ']' 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60825 00:06:32.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60825) - No such process 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60825 is not found' 00:06:32.313 14:39:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.313 00:06:32.313 real 0m51.984s 00:06:32.313 user 1m29.113s 00:06:32.313 sys 0m6.616s 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.313 14:39:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.313 
************************************ 00:06:32.313 END TEST cpu_locks 00:06:32.313 ************************************ 00:06:32.313 00:06:32.313 real 1m24.481s 00:06:32.313 user 2m34.394s 00:06:32.313 sys 0m10.830s 00:06:32.313 14:39:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.313 14:39:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.313 ************************************ 00:06:32.313 END TEST event 00:06:32.313 ************************************ 00:06:32.313 14:39:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.313 14:39:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.313 14:39:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.314 14:39:10 -- common/autotest_common.sh@10 -- # set +x 00:06:32.314 ************************************ 00:06:32.314 START TEST thread 00:06:32.314 ************************************ 00:06:32.314 14:39:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.573 * Looking for test storage... 
00:06:32.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.573 14:39:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.573 14:39:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.573 14:39:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.573 14:39:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.573 14:39:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.573 14:39:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.573 14:39:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.573 14:39:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.573 14:39:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.573 14:39:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.573 14:39:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.573 14:39:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:32.573 14:39:10 thread -- scripts/common.sh@345 -- # : 1 00:06:32.573 14:39:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.573 14:39:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.573 14:39:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:32.573 14:39:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:32.573 14:39:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.573 14:39:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:32.573 14:39:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.573 14:39:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:32.573 14:39:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:32.573 14:39:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.573 14:39:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:32.573 14:39:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.573 14:39:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.573 14:39:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.573 14:39:10 thread -- scripts/common.sh@368 -- # return 0 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.573 --rc genhtml_branch_coverage=1 00:06:32.573 --rc genhtml_function_coverage=1 00:06:32.573 --rc genhtml_legend=1 00:06:32.573 --rc geninfo_all_blocks=1 00:06:32.573 --rc geninfo_unexecuted_blocks=1 00:06:32.573 00:06:32.573 ' 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.573 --rc genhtml_branch_coverage=1 00:06:32.573 --rc genhtml_function_coverage=1 00:06:32.573 --rc genhtml_legend=1 00:06:32.573 --rc geninfo_all_blocks=1 00:06:32.573 --rc geninfo_unexecuted_blocks=1 00:06:32.573 00:06:32.573 ' 00:06:32.573 14:39:10 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.573 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.573 --rc genhtml_branch_coverage=1 00:06:32.573 --rc genhtml_function_coverage=1 00:06:32.573 --rc genhtml_legend=1 00:06:32.573 --rc geninfo_all_blocks=1 00:06:32.573 --rc geninfo_unexecuted_blocks=1 00:06:32.573 00:06:32.573 ' 00:06:32.574 14:39:10 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.574 --rc genhtml_branch_coverage=1 00:06:32.574 --rc genhtml_function_coverage=1 00:06:32.574 --rc genhtml_legend=1 00:06:32.574 --rc geninfo_all_blocks=1 00:06:32.574 --rc geninfo_unexecuted_blocks=1 00:06:32.574 00:06:32.574 ' 00:06:32.574 14:39:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.574 14:39:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:32.574 14:39:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.574 14:39:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.574 ************************************ 00:06:32.574 START TEST thread_poller_perf 00:06:32.574 ************************************ 00:06:32.574 14:39:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.833 [2024-12-09 14:39:10.707387] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:32.833 [2024-12-09 14:39:10.707554] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61026 ] 00:06:32.833 [2024-12-09 14:39:10.881593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.092 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:33.092 [2024-12-09 14:39:10.995443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.471 [2024-12-09T14:39:12.593Z] ====================================== 00:06:34.471 [2024-12-09T14:39:12.593Z] busy:2298871520 (cyc) 00:06:34.471 [2024-12-09T14:39:12.593Z] total_run_count: 396000 00:06:34.471 [2024-12-09T14:39:12.593Z] tsc_hz: 2290000000 (cyc) 00:06:34.471 [2024-12-09T14:39:12.593Z] ====================================== 00:06:34.471 [2024-12-09T14:39:12.593Z] poller_cost: 5805 (cyc), 2534 (nsec) 00:06:34.471 ************************************ 00:06:34.471 END TEST thread_poller_perf 00:06:34.471 ************************************ 00:06:34.471 00:06:34.471 real 0m1.561s 00:06:34.471 user 0m1.358s 00:06:34.471 sys 0m0.096s 00:06:34.471 14:39:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.471 14:39:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.471 14:39:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.471 14:39:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:34.471 14:39:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.471 14:39:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.471 ************************************ 00:06:34.471 START TEST thread_poller_perf 00:06:34.471 
************************************ 00:06:34.471 14:39:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.471 [2024-12-09 14:39:12.333510] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:34.471 [2024-12-09 14:39:12.333639] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61062 ] 00:06:34.471 [2024-12-09 14:39:12.504132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.730 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:34.730 [2024-12-09 14:39:12.620286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.110 [2024-12-09T14:39:14.232Z] ====================================== 00:06:36.110 [2024-12-09T14:39:14.232Z] busy:2293145700 (cyc) 00:06:36.110 [2024-12-09T14:39:14.232Z] total_run_count: 4791000 00:06:36.110 [2024-12-09T14:39:14.232Z] tsc_hz: 2290000000 (cyc) 00:06:36.110 [2024-12-09T14:39:14.232Z] ====================================== 00:06:36.110 [2024-12-09T14:39:14.232Z] poller_cost: 478 (cyc), 208 (nsec) 00:06:36.110 00:06:36.110 real 0m1.567s 00:06:36.110 user 0m1.369s 00:06:36.110 sys 0m0.091s 00:06:36.110 ************************************ 00:06:36.110 END TEST thread_poller_perf 00:06:36.110 ************************************ 00:06:36.110 14:39:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.110 14:39:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 14:39:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.110 ************************************ 00:06:36.110 END TEST thread 00:06:36.110 ************************************ 00:06:36.110 
00:06:36.110 real 0m3.486s 00:06:36.110 user 0m2.876s 00:06:36.110 sys 0m0.407s 00:06:36.110 14:39:13 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.110 14:39:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 14:39:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:36.110 14:39:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.110 14:39:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.110 14:39:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.110 14:39:13 -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 ************************************ 00:06:36.110 START TEST app_cmdline 00:06:36.110 ************************************ 00:06:36.110 14:39:13 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.110 * Looking for test storage... 00:06:36.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.110 14:39:14 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.110 14:39:14 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.110 14:39:14 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.110 14:39:14 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:36.110 14:39:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.111 14:39:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:36.111 14:39:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.111 14:39:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.111 14:39:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.111 14:39:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.111 --rc genhtml_branch_coverage=1 00:06:36.111 --rc genhtml_function_coverage=1 00:06:36.111 --rc 
genhtml_legend=1 00:06:36.111 --rc geninfo_all_blocks=1 00:06:36.111 --rc geninfo_unexecuted_blocks=1 00:06:36.111 00:06:36.111 ' 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.111 --rc genhtml_branch_coverage=1 00:06:36.111 --rc genhtml_function_coverage=1 00:06:36.111 --rc genhtml_legend=1 00:06:36.111 --rc geninfo_all_blocks=1 00:06:36.111 --rc geninfo_unexecuted_blocks=1 00:06:36.111 00:06:36.111 ' 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.111 --rc genhtml_branch_coverage=1 00:06:36.111 --rc genhtml_function_coverage=1 00:06:36.111 --rc genhtml_legend=1 00:06:36.111 --rc geninfo_all_blocks=1 00:06:36.111 --rc geninfo_unexecuted_blocks=1 00:06:36.111 00:06:36.111 ' 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.111 --rc genhtml_branch_coverage=1 00:06:36.111 --rc genhtml_function_coverage=1 00:06:36.111 --rc genhtml_legend=1 00:06:36.111 --rc geninfo_all_blocks=1 00:06:36.111 --rc geninfo_unexecuted_blocks=1 00:06:36.111 00:06:36.111 ' 00:06:36.111 14:39:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:36.111 14:39:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61151 00:06:36.111 14:39:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:36.111 14:39:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61151 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61151 ']' 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.111 14:39:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.370 [2024-12-09 14:39:14.299102] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:36.370 [2024-12-09 14:39:14.299328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:06:36.370 [2024-12-09 14:39:14.474911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.629 [2024-12-09 14:39:14.591324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.579 14:39:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.579 14:39:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:37.579 { 00:06:37.579 "version": "SPDK v25.01-pre git sha1 805149865", 00:06:37.579 "fields": { 00:06:37.579 "major": 25, 00:06:37.579 "minor": 1, 00:06:37.579 "patch": 0, 00:06:37.579 "suffix": "-pre", 00:06:37.579 "commit": "805149865" 00:06:37.579 } 00:06:37.579 } 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.579 14:39:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.579 14:39:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.579 14:39:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.579 14:39:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.870 14:39:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.870 14:39:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.870 14:39:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.870 request: 00:06:37.870 { 00:06:37.870 "method": "env_dpdk_get_mem_stats", 00:06:37.870 "req_id": 1 00:06:37.870 } 00:06:37.870 Got JSON-RPC error response 00:06:37.870 response: 00:06:37.870 { 00:06:37.870 "code": -32601, 00:06:37.870 "message": "Method not found" 00:06:37.870 } 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.870 14:39:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61151 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61151 ']' 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61151 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61151 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61151' 00:06:37.870 killing process with pid 61151 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 61151 00:06:37.870 14:39:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 61151 00:06:40.408 00:06:40.408 real 0m4.391s 00:06:40.408 user 0m4.629s 00:06:40.408 sys 0m0.583s 00:06:40.408 14:39:18 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.408 14:39:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.408 ************************************ 00:06:40.408 END TEST app_cmdline 00:06:40.408 ************************************ 00:06:40.408 14:39:18 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.408 14:39:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.408 14:39:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.408 14:39:18 -- common/autotest_common.sh@10 -- # set +x 00:06:40.408 ************************************ 00:06:40.408 START TEST version 00:06:40.408 ************************************ 00:06:40.408 14:39:18 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.668 * Looking for test storage... 00:06:40.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.668 14:39:18 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.668 14:39:18 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.668 14:39:18 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.668 14:39:18 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.668 14:39:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.668 14:39:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.668 14:39:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.668 14:39:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.668 14:39:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.668 14:39:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.668 14:39:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.668 14:39:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.668 14:39:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.668 14:39:18 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:40.668 14:39:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.668 14:39:18 version -- scripts/common.sh@344 -- # case "$op" in 00:06:40.668 14:39:18 version -- scripts/common.sh@345 -- # : 1 00:06:40.668 14:39:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.668 14:39:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.668 14:39:18 version -- scripts/common.sh@365 -- # decimal 1 00:06:40.668 14:39:18 version -- scripts/common.sh@353 -- # local d=1 00:06:40.668 14:39:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.668 14:39:18 version -- scripts/common.sh@355 -- # echo 1 00:06:40.668 14:39:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.669 14:39:18 version -- scripts/common.sh@366 -- # decimal 2 00:06:40.669 14:39:18 version -- scripts/common.sh@353 -- # local d=2 00:06:40.669 14:39:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.669 14:39:18 version -- scripts/common.sh@355 -- # echo 2 00:06:40.669 14:39:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.669 14:39:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.669 14:39:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.669 14:39:18 version -- scripts/common.sh@368 -- # return 0 00:06:40.669 14:39:18 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.669 14:39:18 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.669 --rc genhtml_branch_coverage=1 00:06:40.669 --rc genhtml_function_coverage=1 00:06:40.669 --rc genhtml_legend=1 00:06:40.669 --rc geninfo_all_blocks=1 00:06:40.669 --rc geninfo_unexecuted_blocks=1 00:06:40.669 00:06:40.669 ' 00:06:40.669 14:39:18 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:06:40.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.669 --rc genhtml_branch_coverage=1 00:06:40.669 --rc genhtml_function_coverage=1 00:06:40.669 --rc genhtml_legend=1 00:06:40.669 --rc geninfo_all_blocks=1 00:06:40.669 --rc geninfo_unexecuted_blocks=1 00:06:40.669 00:06:40.669 ' 00:06:40.669 14:39:18 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.669 --rc genhtml_branch_coverage=1 00:06:40.669 --rc genhtml_function_coverage=1 00:06:40.669 --rc genhtml_legend=1 00:06:40.669 --rc geninfo_all_blocks=1 00:06:40.669 --rc geninfo_unexecuted_blocks=1 00:06:40.669 00:06:40.669 ' 00:06:40.669 14:39:18 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.669 --rc genhtml_branch_coverage=1 00:06:40.669 --rc genhtml_function_coverage=1 00:06:40.669 --rc genhtml_legend=1 00:06:40.669 --rc geninfo_all_blocks=1 00:06:40.669 --rc geninfo_unexecuted_blocks=1 00:06:40.669 00:06:40.669 ' 00:06:40.669 14:39:18 version -- app/version.sh@17 -- # get_header_version major 00:06:40.669 14:39:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # cut -f2 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.669 14:39:18 version -- app/version.sh@17 -- # major=25 00:06:40.669 14:39:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:40.669 14:39:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # cut -f2 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.669 14:39:18 version -- app/version.sh@18 -- # minor=1 00:06:40.669 14:39:18 
version -- app/version.sh@19 -- # get_header_version patch 00:06:40.669 14:39:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # cut -f2 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.669 14:39:18 version -- app/version.sh@19 -- # patch=0 00:06:40.669 14:39:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:40.669 14:39:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # cut -f2 00:06:40.669 14:39:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.669 14:39:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:40.669 14:39:18 version -- app/version.sh@22 -- # version=25.1 00:06:40.669 14:39:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:40.669 14:39:18 version -- app/version.sh@28 -- # version=25.1rc0 00:06:40.669 14:39:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:40.669 14:39:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:40.669 14:39:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:40.669 14:39:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:40.669 ************************************ 00:06:40.669 END TEST version 00:06:40.669 ************************************ 00:06:40.669 00:06:40.669 real 0m0.290s 00:06:40.669 user 0m0.174s 00:06:40.669 sys 0m0.165s 00:06:40.669 14:39:18 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.669 14:39:18 version -- common/autotest_common.sh@10 -- # set +x 00:06:40.669 
14:39:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:40.669 14:39:18 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:40.669 14:39:18 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.669 14:39:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.669 14:39:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.669 14:39:18 -- common/autotest_common.sh@10 -- # set +x 00:06:40.669 ************************************ 00:06:40.669 START TEST bdev_raid 00:06:40.669 ************************************ 00:06:40.669 14:39:18 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.929 * Looking for test storage... 00:06:40.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.929 14:39:18 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.929 14:39:18 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.929 14:39:18 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.929 14:39:18 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.929 14:39:18 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.930 14:39:18 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.930 --rc genhtml_branch_coverage=1 00:06:40.930 --rc genhtml_function_coverage=1 00:06:40.930 --rc genhtml_legend=1 00:06:40.930 --rc geninfo_all_blocks=1 00:06:40.930 --rc geninfo_unexecuted_blocks=1 00:06:40.930 00:06:40.930 ' 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.930 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:40.930 --rc genhtml_branch_coverage=1 00:06:40.930 --rc genhtml_function_coverage=1 00:06:40.930 --rc genhtml_legend=1 00:06:40.930 --rc geninfo_all_blocks=1 00:06:40.930 --rc geninfo_unexecuted_blocks=1 00:06:40.930 00:06:40.930 ' 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.930 --rc genhtml_branch_coverage=1 00:06:40.930 --rc genhtml_function_coverage=1 00:06:40.930 --rc genhtml_legend=1 00:06:40.930 --rc geninfo_all_blocks=1 00:06:40.930 --rc geninfo_unexecuted_blocks=1 00:06:40.930 00:06:40.930 ' 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.930 --rc genhtml_branch_coverage=1 00:06:40.930 --rc genhtml_function_coverage=1 00:06:40.930 --rc genhtml_legend=1 00:06:40.930 --rc geninfo_all_blocks=1 00:06:40.930 --rc geninfo_unexecuted_blocks=1 00:06:40.930 00:06:40.930 ' 00:06:40.930 14:39:18 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.930 14:39:18 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.930 14:39:18 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:40.930 14:39:18 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:40.930 14:39:18 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:40.930 14:39:18 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:40.930 14:39:18 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.930 14:39:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.930 ************************************ 
00:06:40.930 START TEST raid1_resize_data_offset_test 00:06:40.930 ************************************ 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=61339 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 61339' 00:06:40.930 Process raid pid: 61339 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 61339 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 61339 ']' 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.930 14:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.190 [2024-12-09 14:39:19.074771] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:41.190 [2024-12-09 14:39:19.074984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.190 [2024-12-09 14:39:19.247095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.449 [2024-12-09 14:39:19.361414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.449 [2024-12-09 14:39:19.569389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.449 [2024-12-09 14:39:19.569528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.018 malloc0 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.018 14:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.018 malloc1 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.018 14:39:20 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.018 null0 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.018 [2024-12-09 14:39:20.077436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:42.018 [2024-12-09 14:39:20.079524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:42.018 [2024-12-09 14:39:20.079647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:42.018 [2024-12-09 14:39:20.079860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.018 [2024-12-09 14:39:20.079927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:42.018 [2024-12-09 14:39:20.080258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:42.018 [2024-12-09 14:39:20.080482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.018 [2024-12-09 14:39:20.080530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.018 [2024-12-09 14:39:20.080762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.018 [2024-12-09 14:39:20.133290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.018 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.587 malloc2 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.587 [2024-12-09 14:39:20.668045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:42.587 [2024-12-09 14:39:20.684539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.587 [2024-12-09 14:39:20.686394] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.587 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 61339 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 61339 ']' 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 61339 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61339 00:06:42.847 killing process with pid 61339 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61339' 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 61339 00:06:42.847 14:39:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 61339 00:06:42.847 [2024-12-09 14:39:20.773887] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.847 [2024-12-09 14:39:20.774164] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:42.847 [2024-12-09 14:39:20.774215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.847 [2024-12-09 14:39:20.774233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:42.847 [2024-12-09 14:39:20.809886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.847 [2024-12-09 14:39:20.810209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.847 [2024-12-09 14:39:20.810226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:44.764 [2024-12-09 14:39:22.605601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.701 ************************************ 00:06:45.702 END TEST raid1_resize_data_offset_test 00:06:45.702 ************************************ 00:06:45.702 14:39:23 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:45.702 00:06:45.702 real 0m4.739s 00:06:45.702 user 0m4.646s 00:06:45.702 sys 0m0.512s 00:06:45.702 14:39:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.702 14:39:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 14:39:23 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:45.702 14:39:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.702 14:39:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.702 14:39:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 ************************************ 00:06:45.702 START TEST raid0_resize_superblock_test 00:06:45.702 ************************************ 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:45.702 Process raid pid: 61422 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61422 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61422' 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61422 00:06:45.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61422 ']' 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 14:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.961 [2024-12-09 14:39:23.874954] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:45.961 [2024-12-09 14:39:23.875516] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.961 [2024-12-09 14:39:24.048414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.219 [2024-12-09 14:39:24.163876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.478 [2024-12-09 14:39:24.366128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.478 [2024-12-09 14:39:24.366175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.738 14:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.738 14:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:46.738 14:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:46.738 14:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.738 14:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.306 malloc0 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.306 [2024-12-09 14:39:25.254174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:47.306 [2024-12-09 14:39:25.254238] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.306 [2024-12-09 14:39:25.254261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:47.306 [2024-12-09 14:39:25.254273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.306 [2024-12-09 14:39:25.256353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.306 [2024-12-09 14:39:25.256395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:47.306 pt0 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.306 692038be-4d5d-4270-b909-9b1bb643cfcd 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.306 1bb23415-3eb2-42ae-aa3d-40b4ff2fb971 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.306 14:39:25 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.306 b690bbd2-d6e2-45dd-8fed-ecc4d6d2764e 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.306 [2024-12-09 14:39:25.380110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1bb23415-3eb2-42ae-aa3d-40b4ff2fb971 is claimed 00:06:47.306 [2024-12-09 14:39:25.380202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b690bbd2-d6e2-45dd-8fed-ecc4d6d2764e is claimed 00:06:47.306 [2024-12-09 14:39:25.380342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:47.306 [2024-12-09 14:39:25.380359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:47.306 [2024-12-09 14:39:25.380647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:47.306 [2024-12-09 14:39:25.380851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:47.306 [2024-12-09 14:39:25.380867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:47.306 [2024-12-09 14:39:25.381028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.306 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 [2024-12-09 
14:39:25.488200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 [2024-12-09 14:39:25.512077] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:47.566 [2024-12-09 14:39:25.512145] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1bb23415-3eb2-42ae-aa3d-40b4ff2fb971' was resized: old size 131072, new size 204800 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 [2024-12-09 14:39:25.519961] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:47.566 [2024-12-09 14:39:25.520024] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b690bbd2-d6e2-45dd-8fed-ecc4d6d2764e' was resized: old size 131072, new size 204800 00:06:47.566 
[2024-12-09 14:39:25.520082] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.566 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.567 14:39:25 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.567 [2024-12-09 14:39:25.611960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.567 [2024-12-09 14:39:25.655721] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:47.567 [2024-12-09 14:39:25.655794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:47.567 [2024-12-09 14:39:25.655808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:47.567 [2024-12-09 14:39:25.655822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:47.567 [2024-12-09 14:39:25.655953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.567 [2024-12-09 14:39:25.655991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.567 
[2024-12-09 14:39:25.656002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.567 [2024-12-09 14:39:25.663582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:47.567 [2024-12-09 14:39:25.663645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.567 [2024-12-09 14:39:25.663666] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:47.567 [2024-12-09 14:39:25.663678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.567 [2024-12-09 14:39:25.665916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.567 [2024-12-09 14:39:25.665997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:47.567 [2024-12-09 14:39:25.667852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1bb23415-3eb2-42ae-aa3d-40b4ff2fb971 00:06:47.567 [2024-12-09 14:39:25.667942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1bb23415-3eb2-42ae-aa3d-40b4ff2fb971 is claimed 00:06:47.567 [2024-12-09 14:39:25.668064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b690bbd2-d6e2-45dd-8fed-ecc4d6d2764e 00:06:47.567 [2024-12-09 14:39:25.668083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b690bbd2-d6e2-45dd-8fed-ecc4d6d2764e is claimed 00:06:47.567 [2024-12-09 14:39:25.668262] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b690bbd2-d6e2-45dd-8fed-ecc4d6d2764e (2) smaller than existing raid bdev Raid (3) 00:06:47.567 [2024-12-09 14:39:25.668286] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 1bb23415-3eb2-42ae-aa3d-40b4ff2fb971: File exists 00:06:47.567 [2024-12-09 14:39:25.668322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:47.567 [2024-12-09 14:39:25.668334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:47.567 pt0 00:06:47.567 [2024-12-09 14:39:25.668599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:47.567 [2024-12-09 14:39:25.668756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.567 [2024-12-09 14:39:25.668765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:47.567 [2024-12-09 14:39:25.668938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.567 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.567 [2024-12-09 14:39:25.684112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.826 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.826 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.826 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61422 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61422 ']' 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61422 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61422 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 61422' 00:06:47.827 killing process with pid 61422 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61422 00:06:47.827 [2024-12-09 14:39:25.747764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.827 14:39:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61422 00:06:47.827 [2024-12-09 14:39:25.747906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.827 [2024-12-09 14:39:25.747988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.827 [2024-12-09 14:39:25.748034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:49.205 [2024-12-09 14:39:27.180503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.583 14:39:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:50.583 00:06:50.583 real 0m4.538s 00:06:50.583 user 0m4.694s 00:06:50.583 sys 0m0.558s 00:06:50.583 14:39:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.583 14:39:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 ************************************ 00:06:50.583 END TEST raid0_resize_superblock_test 00:06:50.583 ************************************ 00:06:50.583 14:39:28 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:50.583 14:39:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.583 14:39:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.583 14:39:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 ************************************ 00:06:50.583 START TEST raid1_resize_superblock_test 00:06:50.583 
************************************ 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61521 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61521' 00:06:50.583 Process raid pid: 61521 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61521 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61521 ']' 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.583 14:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 [2024-12-09 14:39:28.475160] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:50.583 [2024-12-09 14:39:28.475317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.583 [2024-12-09 14:39:28.652765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.842 [2024-12-09 14:39:28.770736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.102 [2024-12-09 14:39:28.986560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.102 [2024-12-09 14:39:28.986606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.361 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.361 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:51.361 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:51.361 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.361 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 malloc0 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 [2024-12-09 14:39:29.856835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:51.930 [2024-12-09 14:39:29.856909] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.930 [2024-12-09 14:39:29.856933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:51.930 [2024-12-09 14:39:29.856944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.930 [2024-12-09 14:39:29.859361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.930 [2024-12-09 14:39:29.859409] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:51.930 pt0 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 05fb65af-b939-4b60-a127-8a122afde5a4 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 05a336ff-fc43-45d1-9cef-b96583c5ea06 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.930 14:39:29 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 6a090a78-b8d3-4aac-ac56-97d40faaf1af 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 [2024-12-09 14:39:29.975205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 05a336ff-fc43-45d1-9cef-b96583c5ea06 is claimed 00:06:51.930 [2024-12-09 14:39:29.975291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6a090a78-b8d3-4aac-ac56-97d40faaf1af is claimed 00:06:51.930 [2024-12-09 14:39:29.975425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:51.930 [2024-12-09 14:39:29.975440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:51.930 [2024-12-09 14:39:29.975703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:51.930 [2024-12-09 14:39:29.975887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:51.930 [2024-12-09 14:39:29.975904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:51.930 [2024-12-09 14:39:29.976067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:51.930 14:39:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.930 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:51.930 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:51.930 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:51.930 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.930 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:52.190 [2024-12-09 
14:39:30.083267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.190 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 [2024-12-09 14:39:30.131156] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.191 [2024-12-09 14:39:30.131184] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '05a336ff-fc43-45d1-9cef-b96583c5ea06' was resized: old size 131072, new size 204800 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 [2024-12-09 14:39:30.139002] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.191 [2024-12-09 14:39:30.139025] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6a090a78-b8d3-4aac-ac56-97d40faaf1af' was resized: old size 131072, new size 204800 00:06:52.191 
[2024-12-09 14:39:30.139053] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.191 14:39:30 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 [2024-12-09 14:39:30.246996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 [2024-12-09 14:39:30.294719] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:52.191 [2024-12-09 14:39:30.294842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:52.191 [2024-12-09 14:39:30.294875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:52.191 [2024-12-09 14:39:30.295052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.191 [2024-12-09 14:39:30.295313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.191 [2024-12-09 14:39:30.295379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:06:52.191 [2024-12-09 14:39:30.295393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.191 [2024-12-09 14:39:30.302609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.191 [2024-12-09 14:39:30.302658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.191 [2024-12-09 14:39:30.302693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:52.191 [2024-12-09 14:39:30.302705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.191 [2024-12-09 14:39:30.304878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.191 [2024-12-09 14:39:30.304918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:52.191 pt0 00:06:52.191 [2024-12-09 14:39:30.306798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 05a336ff-fc43-45d1-9cef-b96583c5ea06 00:06:52.191 [2024-12-09 14:39:30.306901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 05a336ff-fc43-45d1-9cef-b96583c5ea06 is claimed 00:06:52.191 [2024-12-09 14:39:30.307027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6a090a78-b8d3-4aac-ac56-97d40faaf1af 00:06:52.191 [2024-12-09 14:39:30.307050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6a090a78-b8d3-4aac-ac56-97d40faaf1af is claimed 00:06:52.191 
[2024-12-09 14:39:30.307211] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 6a090a78-b8d3-4aac-ac56-97d40faaf1af (2) smaller than existing raid bdev Raid (3) 00:06:52.191 [2024-12-09 14:39:30.307237] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 05a336ff-fc43-45d1-9cef-b96583c5ea06: File exists 00:06:52.191 [2024-12-09 14:39:30.307273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:52.191 [2024-12-09 14:39:30.307284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:52.191 [2024-12-09 14:39:30.307567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.191 [2024-12-09 14:39:30.307759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:52.191 [2024-12-09 14:39:30.307773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:52.191 [2024-12-09 14:39:30.307932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.191 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:52.450 [2024-12-09 14:39:30.323086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61521 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61521 ']' 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61521 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61521 00:06:52.450 killing process with pid 61521 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61521' 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61521 00:06:52.450 14:39:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61521 00:06:52.450 [2024-12-09 14:39:30.403709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.450 [2024-12-09 14:39:30.403814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.450 [2024-12-09 14:39:30.403874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.450 [2024-12-09 14:39:30.403884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:53.825 [2024-12-09 14:39:31.852782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.204 14:39:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:55.205 00:06:55.205 real 0m4.639s 00:06:55.205 user 0m4.833s 00:06:55.205 sys 0m0.560s 00:06:55.205 14:39:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.205 14:39:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.205 ************************************ 00:06:55.205 END TEST raid1_resize_superblock_test 00:06:55.205 ************************************ 00:06:55.205 14:39:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:55.205 14:39:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:55.205 14:39:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:55.205 14:39:33 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:55.205 14:39:33 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:55.205 14:39:33 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:55.205 
14:39:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.205 14:39:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.205 14:39:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.205 ************************************ 00:06:55.205 START TEST raid_function_test_raid0 00:06:55.205 ************************************ 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=61628 00:06:55.205 Process raid pid: 61628 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 61628' 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 61628 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 61628 ']' 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.205 14:39:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:55.205 [2024-12-09 14:39:33.200828] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:55.205 [2024-12-09 14:39:33.200945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.555 [2024-12-09 14:39:33.373732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.555 [2024-12-09 14:39:33.492443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.814 [2024-12-09 14:39:33.702861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.814 [2024-12-09 14:39:33.702903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.074 Base_1 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.074 
14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.074 Base_2 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.074 [2024-12-09 14:39:34.113404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.074 [2024-12-09 14:39:34.115483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.074 [2024-12-09 14:39:34.115558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.074 [2024-12-09 14:39:34.115570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.074 [2024-12-09 14:39:34.115897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:56.074 [2024-12-09 14:39:34.116083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.074 [2024-12-09 14:39:34.116100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:56.074 [2024-12-09 14:39:34.116275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:56.074 14:39:34 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.074 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:56.333 [2024-12-09 14:39:34.353070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:56.333 /dev/nbd0 00:06:56.333 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.333 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:06:56.333 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.333 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:56.333 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.333 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.334 1+0 records in 00:06:56.334 1+0 records out 00:06:56.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451078 s, 9.1 MB/s 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.334 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.593 { 00:06:56.593 "nbd_device": "/dev/nbd0", 00:06:56.593 "bdev_name": "raid" 00:06:56.593 } 00:06:56.593 ]' 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.593 { 00:06:56.593 "nbd_device": "/dev/nbd0", 00:06:56.593 "bdev_name": "raid" 00:06:56.593 } 00:06:56.593 ]' 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:56.593 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:56.594 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:56.852 4096+0 records in 00:06:56.852 4096+0 records out 00:06:56.852 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0270365 s, 77.6 MB/s 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:56.853 4096+0 records in 00:06:56.853 4096+0 records out 00:06:56.853 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.21264 s, 9.9 MB/s 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:56.853 128+0 records in 00:06:56.853 128+0 records out 00:06:56.853 65536 bytes (66 kB, 64 KiB) copied, 0.00125206 s, 52.3 MB/s 00:06:56.853 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:57.112 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.112 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.112 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.112 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.112 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:57.113 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:57.113 14:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:57.113 2035+0 records in 00:06:57.113 2035+0 records out 00:06:57.113 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00986991 s, 106 MB/s 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:57.113 456+0 records in 00:06:57.113 456+0 records out 00:06:57.113 233472 bytes (233 kB, 228 KiB) copied, 0.00406051 s, 57.5 MB/s 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.113 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.373 [2024-12-09 14:39:35.302776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.373 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 61628 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 61628 ']' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 61628 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61628 00:06:57.631 killing process with pid 61628 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61628' 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 61628 00:06:57.631 [2024-12-09 14:39:35.647913] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.631 [2024-12-09 14:39:35.648012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.631 [2024-12-09 14:39:35.648059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.631 14:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 61628 00:06:57.631 [2024-12-09 14:39:35.648075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:57.890 [2024-12-09 14:39:35.866398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.270 14:39:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:59.270 00:06:59.270 real 0m3.955s 00:06:59.270 user 0m4.601s 00:06:59.270 sys 0m0.932s 00:06:59.270 14:39:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.270 ************************************ 00:06:59.270 END TEST raid_function_test_raid0 00:06:59.270 ************************************ 00:06:59.270 14:39:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.270 14:39:37 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:59.270 14:39:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.270 14:39:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.270 14:39:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.270 
************************************ 00:06:59.270 START TEST raid_function_test_concat 00:06:59.270 ************************************ 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=61747 00:06:59.270 Process raid pid: 61747 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 61747' 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 61747 00:06:59.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 61747 ']' 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.270 14:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:59.270 [2024-12-09 14:39:37.220349] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:59.270 [2024-12-09 14:39:37.220470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.529 [2024-12-09 14:39:37.396805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.529 [2024-12-09 14:39:37.514684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.788 [2024-12-09 14:39:37.720218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.788 [2024-12-09 14:39:37.720367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.047 Base_1 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:00.047 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.307 Base_2 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.307 [2024-12-09 14:39:38.184257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.307 [2024-12-09 14:39:38.186236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.307 [2024-12-09 14:39:38.186313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.307 [2024-12-09 14:39:38.186327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.307 [2024-12-09 14:39:38.186633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.307 [2024-12-09 14:39:38.186806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.307 [2024-12-09 14:39:38.186817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:00.307 [2024-12-09 14:39:38.186999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.307 14:39:38 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.307 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:00.566 [2024-12-09 14:39:38.439928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:00.566 /dev/nbd0 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.566 1+0 records in 00:07:00.566 1+0 records out 00:07:00.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567875 s, 7.2 MB/s 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.566 
14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.566 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.826 { 00:07:00.826 "nbd_device": "/dev/nbd0", 00:07:00.826 "bdev_name": "raid" 00:07:00.826 } 00:07:00.826 ]' 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.826 { 00:07:00.826 "nbd_device": "/dev/nbd0", 00:07:00.826 "bdev_name": "raid" 00:07:00.826 } 00:07:00.826 ]' 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:00.826 
14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:00.826 4096+0 records in 00:07:00.826 4096+0 records out 00:07:00.826 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0335259 s, 62.6 MB/s 00:07:00.826 14:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:01.085 4096+0 records in 00:07:01.086 4096+0 
records out 00:07:01.086 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.213348 s, 9.8 MB/s 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:01.086 128+0 records in 00:07:01.086 128+0 records out 00:07:01.086 65536 bytes (66 kB, 64 KiB) copied, 0.00120062 s, 54.6 MB/s 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:01.086 2035+0 records in 00:07:01.086 2035+0 records out 00:07:01.086 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0101538 s, 103 MB/s 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:01.086 456+0 records in 00:07:01.086 456+0 records out 00:07:01.086 233472 bytes (233 kB, 228 KiB) copied, 0.00348683 s, 67.0 MB/s 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.086 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.344 [2024-12-09 14:39:39.400701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:01.344 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.344 14:39:39 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.603 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.603 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.603 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 61747 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 61747 ']' 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 61747 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.604 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61747 00:07:01.863 killing process with pid 61747 00:07:01.863 14:39:39 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.863 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.863 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61747' 00:07:01.863 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 61747 00:07:01.863 [2024-12-09 14:39:39.737384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.863 [2024-12-09 14:39:39.737492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.863 14:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 61747 00:07:01.863 [2024-12-09 14:39:39.737547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.863 [2024-12-09 14:39:39.737559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:01.863 [2024-12-09 14:39:39.947277] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.240 14:39:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:03.240 ************************************ 00:07:03.240 END TEST raid_function_test_concat 00:07:03.240 ************************************ 00:07:03.240 00:07:03.240 real 0m3.945s 00:07:03.240 user 0m4.633s 00:07:03.240 sys 0m0.937s 00:07:03.240 14:39:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.240 14:39:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:03.240 14:39:41 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:03.240 14:39:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.240 14:39:41 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.240 14:39:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.240 ************************************ 00:07:03.240 START TEST raid0_resize_test 00:07:03.240 ************************************ 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:03.240 Process raid pid: 61876 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61876 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61876' 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.240 14:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61876 00:07:03.241 14:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61876 ']' 00:07:03.241 14:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.241 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:03.241 14:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.241 14:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.241 14:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.241 14:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.241 [2024-12-09 14:39:41.234024] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:03.241 [2024-12-09 14:39:41.234154] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.500 [2024-12-09 14:39:41.407789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.500 [2024-12-09 14:39:41.533829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.759 [2024-12-09 14:39:41.760196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.759 [2024-12-09 14:39:41.760247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 Base_1 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 Base_2 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 [2024-12-09 14:39:42.107290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.023 [2024-12-09 14:39:42.109226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.023 [2024-12-09 14:39:42.109385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:04.023 [2024-12-09 14:39:42.109406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:04.023 [2024-12-09 14:39:42.109770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:04.023 [2024-12-09 14:39:42.109935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:04.023 [2024-12-09 14:39:42.109946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:04.023 [2024-12-09 14:39:42.110150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 [2024-12-09 14:39:42.119239] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.023 [2024-12-09 14:39:42.119267] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:04.023 true 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:04.023 [2024-12-09 14:39:42.131426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.301 [2024-12-09 14:39:42.183243] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.301 [2024-12-09 14:39:42.183290] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:04.301 [2024-12-09 14:39:42.183353] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:04.301 true 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.301 [2024-12-09 14:39:42.199347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61876 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 61876 ']' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 61876 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61876 00:07:04.301 killing process with pid 61876 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61876' 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 61876 00:07:04.301 [2024-12-09 14:39:42.279272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.301 [2024-12-09 14:39:42.279386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.301 14:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 61876 00:07:04.301 [2024-12-09 14:39:42.279443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.301 [2024-12-09 14:39:42.279454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:04.301 [2024-12-09 14:39:42.298225] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.688 ************************************ 00:07:05.688 END TEST raid0_resize_test 00:07:05.688 ************************************ 00:07:05.688 14:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:05.688 00:07:05.688 real 0m2.311s 00:07:05.688 user 0m2.469s 
00:07:05.688 sys 0m0.338s 00:07:05.688 14:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.688 14:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.688 14:39:43 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:05.688 14:39:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.688 14:39:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.688 14:39:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.688 ************************************ 00:07:05.688 START TEST raid1_resize_test 00:07:05.688 ************************************ 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61938 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:05.688 Process raid pid: 61938 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # 
echo 'Process raid pid: 61938' 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61938 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61938 ']' 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.688 14:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.688 [2024-12-09 14:39:43.612653] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:07:05.688 [2024-12-09 14:39:43.612783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.688 [2024-12-09 14:39:43.785635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.947 [2024-12-09 14:39:43.907782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.206 [2024-12-09 14:39:44.118469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.206 [2024-12-09 14:39:44.118521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.465 Base_1 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.465 Base_2 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.465 [2024-12-09 14:39:44.481053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:06.465 [2024-12-09 14:39:44.482976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:06.465 [2024-12-09 14:39:44.483036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:06.465 [2024-12-09 14:39:44.483048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:06.465 [2024-12-09 14:39:44.483301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:06.465 [2024-12-09 14:39:44.483420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:06.465 [2024-12-09 14:39:44.483429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:06.465 [2024-12-09 14:39:44.483556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.465 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.465 [2024-12-09 14:39:44.492996] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.466 [2024-12-09 14:39:44.493063] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:06.466 true 00:07:06.466 
14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.466 [2024-12-09 14:39:44.509150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.466 [2024-12-09 14:39:44.560923] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.466 [2024-12-09 14:39:44.560996] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:06.466 [2024-12-09 14:39:44.561060] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:06.466 true 00:07:06.466 14:39:44 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.466 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.466 [2024-12-09 14:39:44.577061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.724 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.724 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:06.724 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:06.724 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:06.724 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61938 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61938 ']' 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 61938 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61938 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61938' 00:07:06.725 killing process with pid 61938 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 61938 00:07:06.725 [2024-12-09 14:39:44.647322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.725 14:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 61938 00:07:06.725 [2024-12-09 14:39:44.647522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.725 [2024-12-09 14:39:44.648047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.725 [2024-12-09 14:39:44.648143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:06.725 [2024-12-09 14:39:44.665349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.102 14:39:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:08.102 00:07:08.102 real 0m2.290s 00:07:08.102 user 0m2.428s 00:07:08.102 sys 0m0.345s 00:07:08.102 14:39:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.102 14:39:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.102 ************************************ 00:07:08.102 END TEST raid1_resize_test 00:07:08.102 ************************************ 00:07:08.102 14:39:45 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:08.102 14:39:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:08.102 14:39:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:08.102 14:39:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:08.102 14:39:45 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.102 14:39:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.102 ************************************ 00:07:08.102 START TEST raid_state_function_test 00:07:08.102 ************************************ 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61995 00:07:08.102 Process raid pid: 61995 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61995' 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61995 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61995 ']' 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.102 14:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.103 14:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.103 [2024-12-09 14:39:45.973528] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:08.103 [2024-12-09 14:39:45.973667] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.103 [2024-12-09 14:39:46.147860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.361 [2024-12-09 14:39:46.267951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.361 [2024-12-09 14:39:46.476306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.361 [2024-12-09 14:39:46.476343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.929 [2024-12-09 14:39:46.841161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.929 
[2024-12-09 14:39:46.841309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.929 [2024-12-09 14:39:46.841325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.929 [2024-12-09 14:39:46.841335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.929 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.929 "name": "Existed_Raid", 00:07:08.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.929 "strip_size_kb": 64, 00:07:08.929 "state": "configuring", 00:07:08.929 "raid_level": "raid0", 00:07:08.929 "superblock": false, 00:07:08.930 "num_base_bdevs": 2, 00:07:08.930 "num_base_bdevs_discovered": 0, 00:07:08.930 "num_base_bdevs_operational": 2, 00:07:08.930 "base_bdevs_list": [ 00:07:08.930 { 00:07:08.930 "name": "BaseBdev1", 00:07:08.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.930 "is_configured": false, 00:07:08.930 "data_offset": 0, 00:07:08.930 "data_size": 0 00:07:08.930 }, 00:07:08.930 { 00:07:08.930 "name": "BaseBdev2", 00:07:08.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.930 "is_configured": false, 00:07:08.930 "data_offset": 0, 00:07:08.930 "data_size": 0 00:07:08.930 } 00:07:08.930 ] 00:07:08.930 }' 00:07:08.930 14:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.930 14:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.188 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.188 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.189 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.189 [2024-12-09 14:39:47.300346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.189 [2024-12-09 14:39:47.300452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:07:09.189 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.189 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.189 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.189 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.448 [2024-12-09 14:39:47.312314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:09.448 [2024-12-09 14:39:47.312413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:09.448 [2024-12-09 14:39:47.312453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.448 [2024-12-09 14:39:47.312482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.448 [2024-12-09 14:39:47.362715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.448 BaseBdev1 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:09.448 14:39:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.448 [ 00:07:09.448 { 00:07:09.448 "name": "BaseBdev1", 00:07:09.448 "aliases": [ 00:07:09.448 "98b60cc2-0387-42b4-be23-24b0a7456ba8" 00:07:09.448 ], 00:07:09.448 "product_name": "Malloc disk", 00:07:09.448 "block_size": 512, 00:07:09.448 "num_blocks": 65536, 00:07:09.448 "uuid": "98b60cc2-0387-42b4-be23-24b0a7456ba8", 00:07:09.448 "assigned_rate_limits": { 00:07:09.448 "rw_ios_per_sec": 0, 00:07:09.448 "rw_mbytes_per_sec": 0, 00:07:09.448 "r_mbytes_per_sec": 0, 00:07:09.448 "w_mbytes_per_sec": 0 00:07:09.448 }, 00:07:09.448 "claimed": true, 00:07:09.448 "claim_type": "exclusive_write", 00:07:09.448 "zoned": false, 00:07:09.448 "supported_io_types": { 00:07:09.448 "read": true, 00:07:09.448 "write": true, 00:07:09.448 "unmap": true, 00:07:09.448 "flush": true, 
00:07:09.448 "reset": true, 00:07:09.448 "nvme_admin": false, 00:07:09.448 "nvme_io": false, 00:07:09.448 "nvme_io_md": false, 00:07:09.448 "write_zeroes": true, 00:07:09.448 "zcopy": true, 00:07:09.448 "get_zone_info": false, 00:07:09.448 "zone_management": false, 00:07:09.448 "zone_append": false, 00:07:09.448 "compare": false, 00:07:09.448 "compare_and_write": false, 00:07:09.448 "abort": true, 00:07:09.448 "seek_hole": false, 00:07:09.448 "seek_data": false, 00:07:09.448 "copy": true, 00:07:09.448 "nvme_iov_md": false 00:07:09.448 }, 00:07:09.448 "memory_domains": [ 00:07:09.448 { 00:07:09.448 "dma_device_id": "system", 00:07:09.448 "dma_device_type": 1 00:07:09.448 }, 00:07:09.448 { 00:07:09.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.448 "dma_device_type": 2 00:07:09.448 } 00:07:09.448 ], 00:07:09.448 "driver_specific": {} 00:07:09.448 } 00:07:09.448 ] 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.448 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.449 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.449 "name": "Existed_Raid", 00:07:09.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.449 "strip_size_kb": 64, 00:07:09.449 "state": "configuring", 00:07:09.449 "raid_level": "raid0", 00:07:09.449 "superblock": false, 00:07:09.449 "num_base_bdevs": 2, 00:07:09.449 "num_base_bdevs_discovered": 1, 00:07:09.449 "num_base_bdevs_operational": 2, 00:07:09.449 "base_bdevs_list": [ 00:07:09.449 { 00:07:09.449 "name": "BaseBdev1", 00:07:09.449 "uuid": "98b60cc2-0387-42b4-be23-24b0a7456ba8", 00:07:09.449 "is_configured": true, 00:07:09.449 "data_offset": 0, 00:07:09.449 "data_size": 65536 00:07:09.449 }, 00:07:09.449 { 00:07:09.449 "name": "BaseBdev2", 00:07:09.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.449 "is_configured": false, 00:07:09.449 "data_offset": 0, 00:07:09.449 "data_size": 0 00:07:09.449 } 00:07:09.449 ] 00:07:09.449 }' 00:07:09.449 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.449 14:39:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.708 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.708 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.708 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.967 [2024-12-09 14:39:47.830045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.967 [2024-12-09 14:39:47.830167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.967 [2024-12-09 14:39:47.838095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.967 [2024-12-09 14:39:47.840079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.967 [2024-12-09 14:39:47.840159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
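The `verify_raid_bdev_state` calls in the log pipe `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then compare fields of the result against the expected state. A sketch of that flow, with the RPC response replaced by a canned here-document (the live test queries the running `bdev_svc` over its UNIX socket instead; `jq` is assumed to be installed, as in the log):

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state extraction step: pick the raid
# bdev entry by name, then read individual state fields from it.
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<'EOF'
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 2
  }
]
EOF
)
state=$(jq -r '.state' <<<"$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info")
echo "state=${state} discovered=${discovered}"
# → state=configuring discovered=1
```

This matches the point the log has reached: `BaseBdev1` is claimed, `BaseBdev2` does not exist yet, so the raid stays `configuring` with one of two base bdevs discovered.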
00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.967 "name": "Existed_Raid", 00:07:09.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.967 "strip_size_kb": 64, 00:07:09.967 "state": "configuring", 00:07:09.967 "raid_level": "raid0", 00:07:09.967 "superblock": false, 00:07:09.967 "num_base_bdevs": 2, 00:07:09.967 
"num_base_bdevs_discovered": 1, 00:07:09.967 "num_base_bdevs_operational": 2, 00:07:09.967 "base_bdevs_list": [ 00:07:09.967 { 00:07:09.967 "name": "BaseBdev1", 00:07:09.967 "uuid": "98b60cc2-0387-42b4-be23-24b0a7456ba8", 00:07:09.967 "is_configured": true, 00:07:09.967 "data_offset": 0, 00:07:09.967 "data_size": 65536 00:07:09.967 }, 00:07:09.967 { 00:07:09.967 "name": "BaseBdev2", 00:07:09.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.967 "is_configured": false, 00:07:09.967 "data_offset": 0, 00:07:09.967 "data_size": 0 00:07:09.967 } 00:07:09.967 ] 00:07:09.967 }' 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.967 14:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.227 [2024-12-09 14:39:48.318949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.227 [2024-12-09 14:39:48.319112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.227 [2024-12-09 14:39:48.319143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:10.227 [2024-12-09 14:39:48.319569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:10.227 [2024-12-09 14:39:48.319858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.227 [2024-12-09 14:39:48.319921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:10.227 [2024-12-09 14:39:48.320275] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.227 BaseBdev2 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.227 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.487 [ 00:07:10.487 { 00:07:10.487 "name": "BaseBdev2", 00:07:10.487 "aliases": [ 00:07:10.487 "1484280e-c46a-4fb2-9d19-d4a670b69f3e" 00:07:10.487 ], 00:07:10.487 "product_name": "Malloc disk", 00:07:10.487 "block_size": 512, 00:07:10.487 "num_blocks": 65536, 00:07:10.487 "uuid": "1484280e-c46a-4fb2-9d19-d4a670b69f3e", 00:07:10.487 
"assigned_rate_limits": { 00:07:10.487 "rw_ios_per_sec": 0, 00:07:10.487 "rw_mbytes_per_sec": 0, 00:07:10.487 "r_mbytes_per_sec": 0, 00:07:10.487 "w_mbytes_per_sec": 0 00:07:10.487 }, 00:07:10.487 "claimed": true, 00:07:10.487 "claim_type": "exclusive_write", 00:07:10.487 "zoned": false, 00:07:10.487 "supported_io_types": { 00:07:10.487 "read": true, 00:07:10.487 "write": true, 00:07:10.487 "unmap": true, 00:07:10.487 "flush": true, 00:07:10.487 "reset": true, 00:07:10.487 "nvme_admin": false, 00:07:10.487 "nvme_io": false, 00:07:10.487 "nvme_io_md": false, 00:07:10.487 "write_zeroes": true, 00:07:10.487 "zcopy": true, 00:07:10.487 "get_zone_info": false, 00:07:10.487 "zone_management": false, 00:07:10.487 "zone_append": false, 00:07:10.487 "compare": false, 00:07:10.487 "compare_and_write": false, 00:07:10.487 "abort": true, 00:07:10.487 "seek_hole": false, 00:07:10.487 "seek_data": false, 00:07:10.487 "copy": true, 00:07:10.487 "nvme_iov_md": false 00:07:10.487 }, 00:07:10.487 "memory_domains": [ 00:07:10.487 { 00:07:10.487 "dma_device_id": "system", 00:07:10.487 "dma_device_type": 1 00:07:10.487 }, 00:07:10.487 { 00:07:10.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.487 "dma_device_type": 2 00:07:10.487 } 00:07:10.487 ], 00:07:10.487 "driver_specific": {} 00:07:10.487 } 00:07:10.487 ] 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.487 "name": "Existed_Raid", 00:07:10.487 "uuid": "98b90ff6-93ea-4db1-b8f5-bbe56955a9ba", 00:07:10.487 "strip_size_kb": 64, 00:07:10.487 "state": "online", 00:07:10.487 "raid_level": "raid0", 00:07:10.487 "superblock": false, 00:07:10.487 "num_base_bdevs": 2, 00:07:10.487 "num_base_bdevs_discovered": 2, 00:07:10.487 "num_base_bdevs_operational": 2, 00:07:10.487 "base_bdevs_list": [ 00:07:10.487 { 
00:07:10.487 "name": "BaseBdev1", 00:07:10.487 "uuid": "98b60cc2-0387-42b4-be23-24b0a7456ba8", 00:07:10.487 "is_configured": true, 00:07:10.487 "data_offset": 0, 00:07:10.487 "data_size": 65536 00:07:10.487 }, 00:07:10.487 { 00:07:10.487 "name": "BaseBdev2", 00:07:10.487 "uuid": "1484280e-c46a-4fb2-9d19-d4a670b69f3e", 00:07:10.487 "is_configured": true, 00:07:10.487 "data_offset": 0, 00:07:10.487 "data_size": 65536 00:07:10.487 } 00:07:10.487 ] 00:07:10.487 }' 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.487 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.746 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.747 [2024-12-09 14:39:48.822478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.747 "name": "Existed_Raid", 00:07:10.747 "aliases": [ 00:07:10.747 "98b90ff6-93ea-4db1-b8f5-bbe56955a9ba" 00:07:10.747 ], 00:07:10.747 "product_name": "Raid Volume", 00:07:10.747 "block_size": 512, 00:07:10.747 "num_blocks": 131072, 00:07:10.747 "uuid": "98b90ff6-93ea-4db1-b8f5-bbe56955a9ba", 00:07:10.747 "assigned_rate_limits": { 00:07:10.747 "rw_ios_per_sec": 0, 00:07:10.747 "rw_mbytes_per_sec": 0, 00:07:10.747 "r_mbytes_per_sec": 0, 00:07:10.747 "w_mbytes_per_sec": 0 00:07:10.747 }, 00:07:10.747 "claimed": false, 00:07:10.747 "zoned": false, 00:07:10.747 "supported_io_types": { 00:07:10.747 "read": true, 00:07:10.747 "write": true, 00:07:10.747 "unmap": true, 00:07:10.747 "flush": true, 00:07:10.747 "reset": true, 00:07:10.747 "nvme_admin": false, 00:07:10.747 "nvme_io": false, 00:07:10.747 "nvme_io_md": false, 00:07:10.747 "write_zeroes": true, 00:07:10.747 "zcopy": false, 00:07:10.747 "get_zone_info": false, 00:07:10.747 "zone_management": false, 00:07:10.747 "zone_append": false, 00:07:10.747 "compare": false, 00:07:10.747 "compare_and_write": false, 00:07:10.747 "abort": false, 00:07:10.747 "seek_hole": false, 00:07:10.747 "seek_data": false, 00:07:10.747 "copy": false, 00:07:10.747 "nvme_iov_md": false 00:07:10.747 }, 00:07:10.747 "memory_domains": [ 00:07:10.747 { 00:07:10.747 "dma_device_id": "system", 00:07:10.747 "dma_device_type": 1 00:07:10.747 }, 00:07:10.747 { 00:07:10.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.747 "dma_device_type": 2 00:07:10.747 }, 00:07:10.747 { 00:07:10.747 "dma_device_id": "system", 00:07:10.747 "dma_device_type": 1 00:07:10.747 }, 00:07:10.747 { 00:07:10.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.747 "dma_device_type": 2 00:07:10.747 } 00:07:10.747 ], 00:07:10.747 "driver_specific": { 00:07:10.747 "raid": { 00:07:10.747 "uuid": "98b90ff6-93ea-4db1-b8f5-bbe56955a9ba", 
00:07:10.747 "strip_size_kb": 64, 00:07:10.747 "state": "online", 00:07:10.747 "raid_level": "raid0", 00:07:10.747 "superblock": false, 00:07:10.747 "num_base_bdevs": 2, 00:07:10.747 "num_base_bdevs_discovered": 2, 00:07:10.747 "num_base_bdevs_operational": 2, 00:07:10.747 "base_bdevs_list": [ 00:07:10.747 { 00:07:10.747 "name": "BaseBdev1", 00:07:10.747 "uuid": "98b60cc2-0387-42b4-be23-24b0a7456ba8", 00:07:10.747 "is_configured": true, 00:07:10.747 "data_offset": 0, 00:07:10.747 "data_size": 65536 00:07:10.747 }, 00:07:10.747 { 00:07:10.747 "name": "BaseBdev2", 00:07:10.747 "uuid": "1484280e-c46a-4fb2-9d19-d4a670b69f3e", 00:07:10.747 "is_configured": true, 00:07:10.747 "data_offset": 0, 00:07:10.747 "data_size": 65536 00:07:10.747 } 00:07:10.747 ] 00:07:10.747 } 00:07:10.747 } 00:07:10.747 }' 00:07:10.747 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:11.007 BaseBdev2' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
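The `verify_raid_bdev_properties` step above builds a `'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` fingerprint for the raid volume and for each configured base bdev, then checks they are identical (hence the `[[ 512 == \5\1\2\ \ \ ]]` comparisons with trailing spaces, since the metadata fields are absent and join treats them as empty). A sketch over canned bdev JSON, assuming jq ≥ 1.6 (the live test queries `rpc_cmd bdev_get_bdevs -b <name>` per bdev):

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_properties comparison: the raid volume
# and every base bdev must report the same block size / metadata layout.
props='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
raid_json='{"name": "Existed_Raid", "block_size": 512}'
base_json='{"name": "BaseBdev1", "block_size": 512}'
cmp_raid_bdev=$(jq -r ".[] | $props" <<<"[$raid_json]")
cmp_base_bdev=$(jq -r ".[] | $props" <<<"[$base_json]")
if [ "$cmp_raid_bdev" = "$cmp_base_bdev" ]; then
    # Missing md_size/md_interleave/dif_type join as empty strings, so
    # both fingerprints are "512" followed by three spaces.
    echo "properties match: '${cmp_raid_bdev}'"
fi
```

The trailing spaces in the fingerprint are why the log's comparison is written with escaped spaces rather than a plain string equality on `512`.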
00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.007 14:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.007 [2024-12-09 14:39:49.001981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:11.007 [2024-12-09 14:39:49.002025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.007 [2024-12-09 14:39:49.002079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.007 14:39:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.007 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.267 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.267 "name": "Existed_Raid", 00:07:11.267 "uuid": "98b90ff6-93ea-4db1-b8f5-bbe56955a9ba", 00:07:11.267 "strip_size_kb": 64, 00:07:11.267 "state": "offline", 00:07:11.267 "raid_level": "raid0", 00:07:11.267 "superblock": false, 00:07:11.267 "num_base_bdevs": 2, 00:07:11.267 "num_base_bdevs_discovered": 1, 00:07:11.267 "num_base_bdevs_operational": 1, 00:07:11.267 "base_bdevs_list": [ 00:07:11.267 { 00:07:11.267 "name": null, 00:07:11.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.267 "is_configured": false, 00:07:11.267 "data_offset": 0, 00:07:11.267 "data_size": 65536 00:07:11.267 }, 00:07:11.267 { 00:07:11.267 "name": "BaseBdev2", 00:07:11.267 "uuid": "1484280e-c46a-4fb2-9d19-d4a670b69f3e", 00:07:11.267 "is_configured": true, 00:07:11.267 "data_offset": 0, 00:07:11.267 "data_size": 65536 00:07:11.267 } 00:07:11.267 ] 00:07:11.267 }' 00:07:11.267 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.267 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:11.526 14:39:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.526 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.526 [2024-12-09 14:39:49.580395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:11.526 [2024-12-09 14:39:49.580507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61995 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61995 ']' 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61995 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61995 00:07:11.786 killing process with pid 61995 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61995' 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61995 00:07:11.786 [2024-12-09 14:39:49.757927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.786 14:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61995 00:07:11.786 [2024-12-09 14:39:49.774886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:13.165 00:07:13.165 real 0m5.049s 00:07:13.165 user 0m7.269s 00:07:13.165 sys 
0m0.793s 00:07:13.165 ************************************ 00:07:13.165 END TEST raid_state_function_test 00:07:13.165 ************************************ 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.165 14:39:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:13.165 14:39:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.165 14:39:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.165 14:39:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.165 ************************************ 00:07:13.165 START TEST raid_state_function_test_sb 00:07:13.165 ************************************ 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:13.165 14:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62248 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
62248' 00:07:13.165 Process raid pid: 62248 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62248 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62248 ']' 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.165 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.165 [2024-12-09 14:39:51.083523] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:07:13.165 [2024-12-09 14:39:51.083732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.165 [2024-12-09 14:39:51.258622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.424 [2024-12-09 14:39:51.382220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.684 [2024-12-09 14:39:51.593062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.684 [2024-12-09 14:39:51.593175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.943 [2024-12-09 14:39:51.985641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.943 [2024-12-09 14:39:51.985701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.943 [2024-12-09 14:39:51.985712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.943 [2024-12-09 14:39:51.985722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.943 
14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.943 14:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.943 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.943 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.943 "name": "Existed_Raid", 00:07:13.943 "uuid": "75e91287-6d64-422d-b3dd-17ba5b4c15aa", 00:07:13.943 "strip_size_kb": 
64, 00:07:13.943 "state": "configuring", 00:07:13.943 "raid_level": "raid0", 00:07:13.943 "superblock": true, 00:07:13.943 "num_base_bdevs": 2, 00:07:13.943 "num_base_bdevs_discovered": 0, 00:07:13.943 "num_base_bdevs_operational": 2, 00:07:13.943 "base_bdevs_list": [ 00:07:13.943 { 00:07:13.943 "name": "BaseBdev1", 00:07:13.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.943 "is_configured": false, 00:07:13.943 "data_offset": 0, 00:07:13.943 "data_size": 0 00:07:13.943 }, 00:07:13.943 { 00:07:13.943 "name": "BaseBdev2", 00:07:13.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.943 "is_configured": false, 00:07:13.943 "data_offset": 0, 00:07:13.943 "data_size": 0 00:07:13.944 } 00:07:13.944 ] 00:07:13.944 }' 00:07:13.944 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.944 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.512 [2024-12-09 14:39:52.456776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.512 [2024-12-09 14:39:52.456872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.512 14:39:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.512 [2024-12-09 14:39:52.468749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.512 [2024-12-09 14:39:52.468831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.512 [2024-12-09 14:39:52.468860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.512 [2024-12-09 14:39:52.468886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.512 [2024-12-09 14:39:52.517259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.512 BaseBdev1 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.512 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.513 [ 00:07:14.513 { 00:07:14.513 "name": "BaseBdev1", 00:07:14.513 "aliases": [ 00:07:14.513 "050a7f14-ef2e-4ecc-8d63-6d933b1aa904" 00:07:14.513 ], 00:07:14.513 "product_name": "Malloc disk", 00:07:14.513 "block_size": 512, 00:07:14.513 "num_blocks": 65536, 00:07:14.513 "uuid": "050a7f14-ef2e-4ecc-8d63-6d933b1aa904", 00:07:14.513 "assigned_rate_limits": { 00:07:14.513 "rw_ios_per_sec": 0, 00:07:14.513 "rw_mbytes_per_sec": 0, 00:07:14.513 "r_mbytes_per_sec": 0, 00:07:14.513 "w_mbytes_per_sec": 0 00:07:14.513 }, 00:07:14.513 "claimed": true, 00:07:14.513 "claim_type": "exclusive_write", 00:07:14.513 "zoned": false, 00:07:14.513 "supported_io_types": { 00:07:14.513 "read": true, 00:07:14.513 "write": true, 00:07:14.513 "unmap": true, 00:07:14.513 "flush": true, 00:07:14.513 "reset": true, 00:07:14.513 "nvme_admin": false, 00:07:14.513 "nvme_io": false, 00:07:14.513 "nvme_io_md": false, 00:07:14.513 "write_zeroes": true, 00:07:14.513 "zcopy": true, 00:07:14.513 "get_zone_info": false, 00:07:14.513 "zone_management": false, 00:07:14.513 "zone_append": false, 00:07:14.513 "compare": false, 00:07:14.513 "compare_and_write": false, 00:07:14.513 
"abort": true, 00:07:14.513 "seek_hole": false, 00:07:14.513 "seek_data": false, 00:07:14.513 "copy": true, 00:07:14.513 "nvme_iov_md": false 00:07:14.513 }, 00:07:14.513 "memory_domains": [ 00:07:14.513 { 00:07:14.513 "dma_device_id": "system", 00:07:14.513 "dma_device_type": 1 00:07:14.513 }, 00:07:14.513 { 00:07:14.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.513 "dma_device_type": 2 00:07:14.513 } 00:07:14.513 ], 00:07:14.513 "driver_specific": {} 00:07:14.513 } 00:07:14.513 ] 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.513 "name": "Existed_Raid", 00:07:14.513 "uuid": "e3479eba-fc24-4d22-8c4c-b36fadeee789", 00:07:14.513 "strip_size_kb": 64, 00:07:14.513 "state": "configuring", 00:07:14.513 "raid_level": "raid0", 00:07:14.513 "superblock": true, 00:07:14.513 "num_base_bdevs": 2, 00:07:14.513 "num_base_bdevs_discovered": 1, 00:07:14.513 "num_base_bdevs_operational": 2, 00:07:14.513 "base_bdevs_list": [ 00:07:14.513 { 00:07:14.513 "name": "BaseBdev1", 00:07:14.513 "uuid": "050a7f14-ef2e-4ecc-8d63-6d933b1aa904", 00:07:14.513 "is_configured": true, 00:07:14.513 "data_offset": 2048, 00:07:14.513 "data_size": 63488 00:07:14.513 }, 00:07:14.513 { 00:07:14.513 "name": "BaseBdev2", 00:07:14.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.513 "is_configured": false, 00:07:14.513 "data_offset": 0, 00:07:14.513 "data_size": 0 00:07:14.513 } 00:07:14.513 ] 00:07:14.513 }' 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.513 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.078 [2024-12-09 14:39:52.976535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.078 [2024-12-09 14:39:52.976608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.078 [2024-12-09 14:39:52.984542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.078 [2024-12-09 14:39:52.986271] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.078 [2024-12-09 14:39:52.986310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.078 14:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.078 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.078 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.078 "name": "Existed_Raid", 00:07:15.078 "uuid": "5df91812-ef65-47f3-84f2-e69963c15b3b", 00:07:15.078 "strip_size_kb": 64, 00:07:15.078 "state": "configuring", 00:07:15.078 "raid_level": "raid0", 00:07:15.078 "superblock": true, 00:07:15.078 "num_base_bdevs": 2, 00:07:15.078 "num_base_bdevs_discovered": 1, 00:07:15.078 "num_base_bdevs_operational": 2, 00:07:15.078 "base_bdevs_list": [ 00:07:15.078 { 00:07:15.078 "name": "BaseBdev1", 00:07:15.078 "uuid": "050a7f14-ef2e-4ecc-8d63-6d933b1aa904", 00:07:15.078 "is_configured": true, 00:07:15.078 "data_offset": 2048, 
00:07:15.078 "data_size": 63488 00:07:15.078 }, 00:07:15.078 { 00:07:15.078 "name": "BaseBdev2", 00:07:15.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.078 "is_configured": false, 00:07:15.079 "data_offset": 0, 00:07:15.079 "data_size": 0 00:07:15.079 } 00:07:15.079 ] 00:07:15.079 }' 00:07:15.079 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.079 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.337 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.337 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.337 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.595 [2024-12-09 14:39:53.487700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.595 [2024-12-09 14:39:53.487963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.595 [2024-12-09 14:39:53.487980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.595 [2024-12-09 14:39:53.488224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.596 [2024-12-09 14:39:53.488373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.596 [2024-12-09 14:39:53.488386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:15.596 [2024-12-09 14:39:53.488543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.596 BaseBdev2 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 [ 00:07:15.596 { 00:07:15.596 "name": "BaseBdev2", 00:07:15.596 "aliases": [ 00:07:15.596 "367eb4cd-5bb7-4150-8e20-dead0ea97dad" 00:07:15.596 ], 00:07:15.596 "product_name": "Malloc disk", 00:07:15.596 "block_size": 512, 00:07:15.596 "num_blocks": 65536, 00:07:15.596 "uuid": "367eb4cd-5bb7-4150-8e20-dead0ea97dad", 00:07:15.596 "assigned_rate_limits": { 00:07:15.596 "rw_ios_per_sec": 0, 00:07:15.596 "rw_mbytes_per_sec": 0, 00:07:15.596 "r_mbytes_per_sec": 0, 00:07:15.596 "w_mbytes_per_sec": 0 00:07:15.596 }, 00:07:15.596 "claimed": true, 00:07:15.596 "claim_type": 
"exclusive_write", 00:07:15.596 "zoned": false, 00:07:15.596 "supported_io_types": { 00:07:15.596 "read": true, 00:07:15.596 "write": true, 00:07:15.596 "unmap": true, 00:07:15.596 "flush": true, 00:07:15.596 "reset": true, 00:07:15.596 "nvme_admin": false, 00:07:15.596 "nvme_io": false, 00:07:15.596 "nvme_io_md": false, 00:07:15.596 "write_zeroes": true, 00:07:15.596 "zcopy": true, 00:07:15.596 "get_zone_info": false, 00:07:15.596 "zone_management": false, 00:07:15.596 "zone_append": false, 00:07:15.596 "compare": false, 00:07:15.596 "compare_and_write": false, 00:07:15.596 "abort": true, 00:07:15.596 "seek_hole": false, 00:07:15.596 "seek_data": false, 00:07:15.596 "copy": true, 00:07:15.596 "nvme_iov_md": false 00:07:15.596 }, 00:07:15.596 "memory_domains": [ 00:07:15.596 { 00:07:15.596 "dma_device_id": "system", 00:07:15.596 "dma_device_type": 1 00:07:15.596 }, 00:07:15.596 { 00:07:15.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.596 "dma_device_type": 2 00:07:15.596 } 00:07:15.596 ], 00:07:15.596 "driver_specific": {} 00:07:15.596 } 00:07:15.596 ] 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.596 "name": "Existed_Raid", 00:07:15.596 "uuid": "5df91812-ef65-47f3-84f2-e69963c15b3b", 00:07:15.596 "strip_size_kb": 64, 00:07:15.596 "state": "online", 00:07:15.596 "raid_level": "raid0", 00:07:15.596 "superblock": true, 00:07:15.596 "num_base_bdevs": 2, 00:07:15.596 "num_base_bdevs_discovered": 2, 00:07:15.596 "num_base_bdevs_operational": 2, 00:07:15.596 "base_bdevs_list": [ 00:07:15.596 { 00:07:15.596 "name": "BaseBdev1", 00:07:15.596 "uuid": "050a7f14-ef2e-4ecc-8d63-6d933b1aa904", 00:07:15.596 "is_configured": true, 00:07:15.596 "data_offset": 2048, 00:07:15.596 "data_size": 63488 
00:07:15.596 }, 00:07:15.596 { 00:07:15.596 "name": "BaseBdev2", 00:07:15.596 "uuid": "367eb4cd-5bb7-4150-8e20-dead0ea97dad", 00:07:15.596 "is_configured": true, 00:07:15.596 "data_offset": 2048, 00:07:15.596 "data_size": 63488 00:07:15.596 } 00:07:15.596 ] 00:07:15.596 }' 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.596 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.856 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:15.856 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:15.856 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.856 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.856 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.856 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.856 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:15.857 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.857 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.857 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.857 [2024-12-09 14:39:53.959257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.119 14:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.119 14:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.119 "name": 
"Existed_Raid", 00:07:16.119 "aliases": [ 00:07:16.119 "5df91812-ef65-47f3-84f2-e69963c15b3b" 00:07:16.119 ], 00:07:16.119 "product_name": "Raid Volume", 00:07:16.119 "block_size": 512, 00:07:16.119 "num_blocks": 126976, 00:07:16.119 "uuid": "5df91812-ef65-47f3-84f2-e69963c15b3b", 00:07:16.119 "assigned_rate_limits": { 00:07:16.119 "rw_ios_per_sec": 0, 00:07:16.119 "rw_mbytes_per_sec": 0, 00:07:16.119 "r_mbytes_per_sec": 0, 00:07:16.119 "w_mbytes_per_sec": 0 00:07:16.119 }, 00:07:16.119 "claimed": false, 00:07:16.119 "zoned": false, 00:07:16.119 "supported_io_types": { 00:07:16.119 "read": true, 00:07:16.119 "write": true, 00:07:16.119 "unmap": true, 00:07:16.119 "flush": true, 00:07:16.119 "reset": true, 00:07:16.119 "nvme_admin": false, 00:07:16.119 "nvme_io": false, 00:07:16.119 "nvme_io_md": false, 00:07:16.119 "write_zeroes": true, 00:07:16.119 "zcopy": false, 00:07:16.119 "get_zone_info": false, 00:07:16.119 "zone_management": false, 00:07:16.119 "zone_append": false, 00:07:16.119 "compare": false, 00:07:16.119 "compare_and_write": false, 00:07:16.119 "abort": false, 00:07:16.119 "seek_hole": false, 00:07:16.119 "seek_data": false, 00:07:16.119 "copy": false, 00:07:16.119 "nvme_iov_md": false 00:07:16.119 }, 00:07:16.119 "memory_domains": [ 00:07:16.119 { 00:07:16.119 "dma_device_id": "system", 00:07:16.119 "dma_device_type": 1 00:07:16.119 }, 00:07:16.119 { 00:07:16.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.119 "dma_device_type": 2 00:07:16.119 }, 00:07:16.119 { 00:07:16.119 "dma_device_id": "system", 00:07:16.119 "dma_device_type": 1 00:07:16.119 }, 00:07:16.119 { 00:07:16.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.119 "dma_device_type": 2 00:07:16.119 } 00:07:16.119 ], 00:07:16.119 "driver_specific": { 00:07:16.119 "raid": { 00:07:16.119 "uuid": "5df91812-ef65-47f3-84f2-e69963c15b3b", 00:07:16.119 "strip_size_kb": 64, 00:07:16.119 "state": "online", 00:07:16.119 "raid_level": "raid0", 00:07:16.119 "superblock": true, 00:07:16.119 
"num_base_bdevs": 2, 00:07:16.119 "num_base_bdevs_discovered": 2, 00:07:16.119 "num_base_bdevs_operational": 2, 00:07:16.119 "base_bdevs_list": [ 00:07:16.119 { 00:07:16.119 "name": "BaseBdev1", 00:07:16.119 "uuid": "050a7f14-ef2e-4ecc-8d63-6d933b1aa904", 00:07:16.119 "is_configured": true, 00:07:16.119 "data_offset": 2048, 00:07:16.119 "data_size": 63488 00:07:16.119 }, 00:07:16.119 { 00:07:16.119 "name": "BaseBdev2", 00:07:16.119 "uuid": "367eb4cd-5bb7-4150-8e20-dead0ea97dad", 00:07:16.119 "is_configured": true, 00:07:16.119 "data_offset": 2048, 00:07:16.119 "data_size": 63488 00:07:16.119 } 00:07:16.119 ] 00:07:16.119 } 00:07:16.119 } 00:07:16.120 }' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:16.120 BaseBdev2' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.120 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.120 [2024-12-09 14:39:54.210617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.120 [2024-12-09 14:39:54.210657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.120 [2024-12-09 14:39:54.210717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.380 14:39:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.380 "name": "Existed_Raid", 00:07:16.380 "uuid": "5df91812-ef65-47f3-84f2-e69963c15b3b", 00:07:16.380 "strip_size_kb": 64, 00:07:16.380 "state": "offline", 00:07:16.380 "raid_level": "raid0", 00:07:16.380 "superblock": true, 00:07:16.380 "num_base_bdevs": 2, 00:07:16.380 "num_base_bdevs_discovered": 1, 00:07:16.380 "num_base_bdevs_operational": 1, 00:07:16.380 "base_bdevs_list": [ 00:07:16.380 { 00:07:16.380 "name": null, 00:07:16.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.380 "is_configured": false, 00:07:16.380 "data_offset": 0, 00:07:16.380 "data_size": 63488 00:07:16.380 }, 00:07:16.380 { 00:07:16.380 "name": "BaseBdev2", 00:07:16.380 "uuid": "367eb4cd-5bb7-4150-8e20-dead0ea97dad", 00:07:16.380 "is_configured": true, 00:07:16.380 "data_offset": 2048, 00:07:16.380 "data_size": 63488 00:07:16.380 } 00:07:16.380 ] 00:07:16.380 }' 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.380 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.639 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:16.639 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.639 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:16.639 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.640 14:39:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.640 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.899 [2024-12-09 14:39:54.779447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:16.899 [2024-12-09 14:39:54.779552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.899 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.900 14:39:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62248 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62248 ']' 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62248 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62248 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.900 killing process with pid 62248 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62248' 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62248 00:07:16.900 [2024-12-09 14:39:54.980997] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.900 14:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62248 00:07:16.900 [2024-12-09 14:39:54.999409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.284 14:39:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:18.284 00:07:18.284 real 0m5.157s 00:07:18.284 user 0m7.457s 00:07:18.284 sys 0m0.825s 00:07:18.284 14:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.284 14:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.284 ************************************ 00:07:18.284 END TEST raid_state_function_test_sb 00:07:18.284 ************************************ 00:07:18.284 14:39:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:18.284 14:39:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:18.284 14:39:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.284 14:39:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.284 ************************************ 00:07:18.284 START TEST raid_superblock_test 00:07:18.284 ************************************ 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:18.284 14:39:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62500 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62500 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62500 ']' 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.284 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.285 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:18.285 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.285 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.285 [2024-12-09 14:39:56.305851] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:18.285 [2024-12-09 14:39:56.305974] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62500 ] 00:07:18.544 [2024-12-09 14:39:56.459325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.544 [2024-12-09 14:39:56.584799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.803 [2024-12-09 14:39:56.797641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.803 [2024-12-09 14:39:56.797692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.063 14:39:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.063 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.323 malloc1 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.323 [2024-12-09 14:39:57.202394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:19.323 [2024-12-09 14:39:57.202455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.323 [2024-12-09 14:39:57.202477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:19.323 [2024-12-09 14:39:57.202486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.323 [2024-12-09 14:39:57.204564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.323 [2024-12-09 14:39:57.204608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:19.323 pt1 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:19.323 14:39:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.323 malloc2 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.323 [2024-12-09 14:39:57.257410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:19.323 [2024-12-09 14:39:57.257485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.323 [2024-12-09 14:39:57.257510] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:19.323 
[2024-12-09 14:39:57.257518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.323 [2024-12-09 14:39:57.259662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.323 [2024-12-09 14:39:57.259697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:19.323 pt2 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.323 [2024-12-09 14:39:57.269473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:19.323 [2024-12-09 14:39:57.271383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:19.323 [2024-12-09 14:39:57.271579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:19.323 [2024-12-09 14:39:57.271597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:19.323 [2024-12-09 14:39:57.271922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.323 [2024-12-09 14:39:57.272119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:19.323 [2024-12-09 14:39:57.272139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:19.323 [2024-12-09 14:39:57.272337] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.323 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.324 "name": "raid_bdev1", 00:07:19.324 "uuid": 
"518a10ca-6399-492f-b814-16ac77a4b1cc", 00:07:19.324 "strip_size_kb": 64, 00:07:19.324 "state": "online", 00:07:19.324 "raid_level": "raid0", 00:07:19.324 "superblock": true, 00:07:19.324 "num_base_bdevs": 2, 00:07:19.324 "num_base_bdevs_discovered": 2, 00:07:19.324 "num_base_bdevs_operational": 2, 00:07:19.324 "base_bdevs_list": [ 00:07:19.324 { 00:07:19.324 "name": "pt1", 00:07:19.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.324 "is_configured": true, 00:07:19.324 "data_offset": 2048, 00:07:19.324 "data_size": 63488 00:07:19.324 }, 00:07:19.324 { 00:07:19.324 "name": "pt2", 00:07:19.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.324 "is_configured": true, 00:07:19.324 "data_offset": 2048, 00:07:19.324 "data_size": 63488 00:07:19.324 } 00:07:19.324 ] 00:07:19.324 }' 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.324 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.894 14:39:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.894 [2024-12-09 14:39:57.728985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.894 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.894 "name": "raid_bdev1", 00:07:19.894 "aliases": [ 00:07:19.894 "518a10ca-6399-492f-b814-16ac77a4b1cc" 00:07:19.894 ], 00:07:19.894 "product_name": "Raid Volume", 00:07:19.894 "block_size": 512, 00:07:19.894 "num_blocks": 126976, 00:07:19.894 "uuid": "518a10ca-6399-492f-b814-16ac77a4b1cc", 00:07:19.894 "assigned_rate_limits": { 00:07:19.894 "rw_ios_per_sec": 0, 00:07:19.894 "rw_mbytes_per_sec": 0, 00:07:19.894 "r_mbytes_per_sec": 0, 00:07:19.894 "w_mbytes_per_sec": 0 00:07:19.894 }, 00:07:19.894 "claimed": false, 00:07:19.894 "zoned": false, 00:07:19.894 "supported_io_types": { 00:07:19.894 "read": true, 00:07:19.894 "write": true, 00:07:19.894 "unmap": true, 00:07:19.894 "flush": true, 00:07:19.894 "reset": true, 00:07:19.894 "nvme_admin": false, 00:07:19.894 "nvme_io": false, 00:07:19.894 "nvme_io_md": false, 00:07:19.894 "write_zeroes": true, 00:07:19.894 "zcopy": false, 00:07:19.894 "get_zone_info": false, 00:07:19.894 "zone_management": false, 00:07:19.894 "zone_append": false, 00:07:19.894 "compare": false, 00:07:19.894 "compare_and_write": false, 00:07:19.894 "abort": false, 00:07:19.894 "seek_hole": false, 00:07:19.894 "seek_data": false, 00:07:19.894 "copy": false, 00:07:19.894 "nvme_iov_md": false 00:07:19.894 }, 00:07:19.894 "memory_domains": [ 00:07:19.894 { 00:07:19.894 "dma_device_id": "system", 00:07:19.894 "dma_device_type": 1 00:07:19.894 }, 00:07:19.894 { 00:07:19.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.894 "dma_device_type": 2 00:07:19.894 }, 00:07:19.894 { 00:07:19.894 "dma_device_id": "system", 00:07:19.894 "dma_device_type": 
1 00:07:19.894 }, 00:07:19.894 { 00:07:19.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.894 "dma_device_type": 2 00:07:19.894 } 00:07:19.894 ], 00:07:19.894 "driver_specific": { 00:07:19.894 "raid": { 00:07:19.894 "uuid": "518a10ca-6399-492f-b814-16ac77a4b1cc", 00:07:19.894 "strip_size_kb": 64, 00:07:19.894 "state": "online", 00:07:19.894 "raid_level": "raid0", 00:07:19.894 "superblock": true, 00:07:19.894 "num_base_bdevs": 2, 00:07:19.895 "num_base_bdevs_discovered": 2, 00:07:19.895 "num_base_bdevs_operational": 2, 00:07:19.895 "base_bdevs_list": [ 00:07:19.895 { 00:07:19.895 "name": "pt1", 00:07:19.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.895 "is_configured": true, 00:07:19.895 "data_offset": 2048, 00:07:19.895 "data_size": 63488 00:07:19.895 }, 00:07:19.895 { 00:07:19.895 "name": "pt2", 00:07:19.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.895 "is_configured": true, 00:07:19.895 "data_offset": 2048, 00:07:19.895 "data_size": 63488 00:07:19.895 } 00:07:19.895 ] 00:07:19.895 } 00:07:19.895 } 00:07:19.895 }' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:19.895 pt2' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.895 [2024-12-09 14:39:57.952532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.895 14:39:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=518a10ca-6399-492f-b814-16ac77a4b1cc 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 518a10ca-6399-492f-b814-16ac77a4b1cc ']' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.895 [2024-12-09 14:39:57.980197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.895 [2024-12-09 14:39:57.980226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.895 [2024-12-09 14:39:57.980313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.895 [2024-12-09 14:39:57.980363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.895 [2024-12-09 14:39:57.980374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:19.895 14:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.156 14:39:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 [2024-12-09 14:39:58.116039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:20.156 [2024-12-09 14:39:58.118013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:20.156 [2024-12-09 14:39:58.118090] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:20.156 [2024-12-09 14:39:58.118146] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:20.156 [2024-12-09 14:39:58.118161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.156 [2024-12-09 14:39:58.118174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:20.156 request: 00:07:20.156 { 00:07:20.156 "name": "raid_bdev1", 00:07:20.156 "raid_level": "raid0", 00:07:20.156 "base_bdevs": [ 00:07:20.156 "malloc1", 00:07:20.156 "malloc2" 00:07:20.156 ], 00:07:20.156 "strip_size_kb": 64, 00:07:20.156 "superblock": false, 00:07:20.156 "method": "bdev_raid_create", 00:07:20.156 "req_id": 1 00:07:20.156 } 00:07:20.156 Got JSON-RPC error response 00:07:20.156 response: 00:07:20.156 { 00:07:20.156 "code": -17, 00:07:20.156 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:20.156 } 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 [2024-12-09 14:39:58.179888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.156 [2024-12-09 14:39:58.179955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.157 [2024-12-09 14:39:58.179973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:20.157 [2024-12-09 14:39:58.179984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.157 [2024-12-09 14:39:58.182396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.157 [2024-12-09 14:39:58.182439] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.157 [2024-12-09 14:39:58.182533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:20.157 [2024-12-09 14:39:58.182608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.157 pt1 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.157 "name": "raid_bdev1", 00:07:20.157 "uuid": "518a10ca-6399-492f-b814-16ac77a4b1cc", 00:07:20.157 "strip_size_kb": 64, 00:07:20.157 "state": "configuring", 00:07:20.157 "raid_level": "raid0", 00:07:20.157 "superblock": true, 00:07:20.157 "num_base_bdevs": 2, 00:07:20.157 "num_base_bdevs_discovered": 1, 00:07:20.157 "num_base_bdevs_operational": 2, 00:07:20.157 "base_bdevs_list": [ 00:07:20.157 { 00:07:20.157 "name": "pt1", 00:07:20.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.157 "is_configured": true, 00:07:20.157 "data_offset": 2048, 00:07:20.157 "data_size": 63488 00:07:20.157 }, 00:07:20.157 { 00:07:20.157 "name": null, 00:07:20.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.157 "is_configured": false, 00:07:20.157 "data_offset": 2048, 00:07:20.157 "data_size": 63488 00:07:20.157 } 00:07:20.157 ] 00:07:20.157 }' 00:07:20.157 14:39:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.157 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.726 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:20.726 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:20.726 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.726 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.727 [2024-12-09 14:39:58.611197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.727 [2024-12-09 14:39:58.611277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.727 [2024-12-09 14:39:58.611299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:20.727 [2024-12-09 14:39:58.611310] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.727 [2024-12-09 14:39:58.611780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.727 [2024-12-09 14:39:58.611810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:20.727 [2024-12-09 14:39:58.611891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:20.727 [2024-12-09 14:39:58.611923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.727 [2024-12-09 14:39:58.612044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.727 [2024-12-09 14:39:58.612059] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.727 [2024-12-09 14:39:58.612297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:20.727 [2024-12-09 14:39:58.612452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.727 [2024-12-09 14:39:58.612464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:20.727 [2024-12-09 14:39:58.612622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.727 pt2 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.727 "name": "raid_bdev1", 00:07:20.727 "uuid": "518a10ca-6399-492f-b814-16ac77a4b1cc", 00:07:20.727 "strip_size_kb": 64, 00:07:20.727 "state": "online", 00:07:20.727 "raid_level": "raid0", 00:07:20.727 "superblock": true, 00:07:20.727 "num_base_bdevs": 2, 00:07:20.727 "num_base_bdevs_discovered": 2, 00:07:20.727 "num_base_bdevs_operational": 2, 00:07:20.727 "base_bdevs_list": [ 00:07:20.727 { 00:07:20.727 "name": "pt1", 00:07:20.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.727 "is_configured": true, 00:07:20.727 "data_offset": 2048, 00:07:20.727 "data_size": 63488 00:07:20.727 }, 00:07:20.727 { 00:07:20.727 "name": "pt2", 00:07:20.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.727 "is_configured": true, 00:07:20.727 "data_offset": 2048, 00:07:20.727 "data_size": 63488 00:07:20.727 } 00:07:20.727 ] 00:07:20.727 }' 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.727 14:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:20.987 
14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.987 [2024-12-09 14:39:59.018777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.987 "name": "raid_bdev1", 00:07:20.987 "aliases": [ 00:07:20.987 "518a10ca-6399-492f-b814-16ac77a4b1cc" 00:07:20.987 ], 00:07:20.987 "product_name": "Raid Volume", 00:07:20.987 "block_size": 512, 00:07:20.987 "num_blocks": 126976, 00:07:20.987 "uuid": "518a10ca-6399-492f-b814-16ac77a4b1cc", 00:07:20.987 "assigned_rate_limits": { 00:07:20.987 "rw_ios_per_sec": 0, 00:07:20.987 "rw_mbytes_per_sec": 0, 00:07:20.987 "r_mbytes_per_sec": 0, 00:07:20.987 "w_mbytes_per_sec": 0 00:07:20.987 }, 00:07:20.987 "claimed": false, 00:07:20.987 "zoned": false, 00:07:20.987 "supported_io_types": { 00:07:20.987 "read": true, 00:07:20.987 "write": true, 00:07:20.987 "unmap": true, 00:07:20.987 "flush": true, 00:07:20.987 "reset": true, 00:07:20.987 "nvme_admin": false, 00:07:20.987 "nvme_io": false, 00:07:20.987 "nvme_io_md": false, 00:07:20.987 
"write_zeroes": true, 00:07:20.987 "zcopy": false, 00:07:20.987 "get_zone_info": false, 00:07:20.987 "zone_management": false, 00:07:20.987 "zone_append": false, 00:07:20.987 "compare": false, 00:07:20.987 "compare_and_write": false, 00:07:20.987 "abort": false, 00:07:20.987 "seek_hole": false, 00:07:20.987 "seek_data": false, 00:07:20.987 "copy": false, 00:07:20.987 "nvme_iov_md": false 00:07:20.987 }, 00:07:20.987 "memory_domains": [ 00:07:20.987 { 00:07:20.987 "dma_device_id": "system", 00:07:20.987 "dma_device_type": 1 00:07:20.987 }, 00:07:20.987 { 00:07:20.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.987 "dma_device_type": 2 00:07:20.987 }, 00:07:20.987 { 00:07:20.987 "dma_device_id": "system", 00:07:20.987 "dma_device_type": 1 00:07:20.987 }, 00:07:20.987 { 00:07:20.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.987 "dma_device_type": 2 00:07:20.987 } 00:07:20.987 ], 00:07:20.987 "driver_specific": { 00:07:20.987 "raid": { 00:07:20.987 "uuid": "518a10ca-6399-492f-b814-16ac77a4b1cc", 00:07:20.987 "strip_size_kb": 64, 00:07:20.987 "state": "online", 00:07:20.987 "raid_level": "raid0", 00:07:20.987 "superblock": true, 00:07:20.987 "num_base_bdevs": 2, 00:07:20.987 "num_base_bdevs_discovered": 2, 00:07:20.987 "num_base_bdevs_operational": 2, 00:07:20.987 "base_bdevs_list": [ 00:07:20.987 { 00:07:20.987 "name": "pt1", 00:07:20.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.987 "is_configured": true, 00:07:20.987 "data_offset": 2048, 00:07:20.987 "data_size": 63488 00:07:20.987 }, 00:07:20.987 { 00:07:20.987 "name": "pt2", 00:07:20.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.987 "is_configured": true, 00:07:20.987 "data_offset": 2048, 00:07:20.987 "data_size": 63488 00:07:20.987 } 00:07:20.987 ] 00:07:20.987 } 00:07:20.987 } 00:07:20.987 }' 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:20.987 pt2' 00:07:20.987 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.247 14:39:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:21.247 [2024-12-09 14:39:59.226411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 518a10ca-6399-492f-b814-16ac77a4b1cc '!=' 518a10ca-6399-492f-b814-16ac77a4b1cc ']' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62500 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62500 ']' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62500 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62500 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test 
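The odd-looking `[[ 512 == \5\1\2\ \ \ ]]` lines are xtrace rendering of a quoted string comparison: jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` turns the null metadata fields into empty strings, so both sides are `512` followed by three trailing spaces, and bash's trace escapes each character of the quoted right-hand pattern. A sketch of the comparison (values mirror the log):

```shell
# join(" ") over [512, null, null, null] renders nulls as empty strings,
# leaving "512" plus three trailing spaces in both captured variables.
cmp_raid_bdev='512   '
cmp_base_bdev='512   '

# Inside [[ ]] an unquoted right-hand side of == is a glob; quoting it
# forces a literal, space-sensitive match. xtrace prints the quoted
# pattern with every character backslash-escaped, as seen in the log.
if [[ $cmp_raid_bdev == "$cmp_base_bdev" ]]; then
    result=match
else
    result=mismatch
fi
echo "$result"   # match
```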
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.247 killing process with pid 62500 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62500' 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62500 00:07:21.247 [2024-12-09 14:39:59.295265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.247 [2024-12-09 14:39:59.295377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.247 [2024-12-09 14:39:59.295427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.247 [2024-12-09 14:39:59.295439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:21.247 14:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62500 00:07:21.507 [2024-12-09 14:39:59.507239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.899 14:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:22.899 00:07:22.899 real 0m4.409s 00:07:22.899 user 0m6.208s 00:07:22.899 sys 0m0.649s 00:07:22.899 14:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.899 14:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.899 ************************************ 00:07:22.899 END TEST raid_superblock_test 00:07:22.899 ************************************ 00:07:22.899 14:40:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:22.899 14:40:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:22.899 14:40:00 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:22.899 14:40:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.899 ************************************ 00:07:22.899 START TEST raid_read_error_test 00:07:22.899 ************************************ 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
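The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdev$i` lines above are a counter loop inside a command substitution that builds the `base_bdevs` array for the error test. A condensed sketch of what those trace lines are executing:

```shell
num_base_bdevs=2

# One "BaseBdevN" per iteration; the unquoted substitution word-splits
# the echoed names into array elements, as the trace shows with
# base_bdevs=('BaseBdev1' 'BaseBdev2').
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))

echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2
```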
# local create_arg 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HEoc5hu9Mj 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62706 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62706 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62706 ']' 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.899 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.899 [2024-12-09 14:40:00.801680] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:22.899 [2024-12-09 14:40:00.801959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62706 ] 00:07:22.899 [2024-12-09 14:40:00.985595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.159 [2024-12-09 14:40:01.099657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.418 [2024-12-09 14:40:01.293291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.418 [2024-12-09 14:40:01.293352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.678 BaseBdev1_malloc 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.678 true 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.678 [2024-12-09 14:40:01.705537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:23.678 [2024-12-09 14:40:01.705632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.678 [2024-12-09 14:40:01.705663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:23.678 [2024-12-09 14:40:01.705675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.678 [2024-12-09 14:40:01.708024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.678 [2024-12-09 14:40:01.708076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:23.678 BaseBdev1 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:23.678 BaseBdev2_malloc 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.678 true 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.678 [2024-12-09 14:40:01.773675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:23.678 [2024-12-09 14:40:01.773748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.678 [2024-12-09 14:40:01.773765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:23.678 [2024-12-09 14:40:01.773776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.678 [2024-12-09 14:40:01.776023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.678 [2024-12-09 14:40:01.776063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:23.678 BaseBdev2 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:23.678 14:40:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.678 [2024-12-09 14:40:01.785711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.678 [2024-12-09 14:40:01.787701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.678 [2024-12-09 14:40:01.787907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.678 [2024-12-09 14:40:01.787932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.678 [2024-12-09 14:40:01.788170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:23.678 [2024-12-09 14:40:01.788350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.678 [2024-12-09 14:40:01.788367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:23.678 [2024-12-09 14:40:01.788525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.678 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.679 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.679 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.679 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.937 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.937 14:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.937 "name": "raid_bdev1", 00:07:23.937 "uuid": "64bdba98-1d2e-4882-95e0-f6329baebc34", 00:07:23.937 "strip_size_kb": 64, 00:07:23.937 "state": "online", 00:07:23.937 "raid_level": "raid0", 00:07:23.937 "superblock": true, 00:07:23.937 "num_base_bdevs": 2, 00:07:23.937 "num_base_bdevs_discovered": 2, 00:07:23.937 "num_base_bdevs_operational": 2, 00:07:23.937 "base_bdevs_list": [ 00:07:23.937 { 00:07:23.937 "name": "BaseBdev1", 00:07:23.937 "uuid": "18f6863d-c90d-5cda-a2a1-327bb8b1b0eb", 00:07:23.937 "is_configured": true, 00:07:23.937 "data_offset": 2048, 00:07:23.937 "data_size": 63488 00:07:23.937 }, 00:07:23.937 { 00:07:23.937 "name": "BaseBdev2", 00:07:23.937 "uuid": "73bc7986-2cf8-5b4f-96bc-da955ef17536", 00:07:23.937 "is_configured": true, 00:07:23.937 "data_offset": 2048, 00:07:23.937 "data_size": 63488 00:07:23.937 } 00:07:23.937 ] 00:07:23.937 }' 00:07:23.938 14:40:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.938 14:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.197 14:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.197 14:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:24.197 [2024-12-09 14:40:02.294191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.136 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.395 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.395 "name": "raid_bdev1", 00:07:25.395 "uuid": "64bdba98-1d2e-4882-95e0-f6329baebc34", 00:07:25.395 "strip_size_kb": 64, 00:07:25.395 "state": "online", 00:07:25.395 "raid_level": "raid0", 00:07:25.395 "superblock": true, 00:07:25.395 "num_base_bdevs": 2, 00:07:25.395 "num_base_bdevs_discovered": 2, 00:07:25.395 "num_base_bdevs_operational": 2, 00:07:25.395 "base_bdevs_list": [ 00:07:25.395 { 00:07:25.395 "name": "BaseBdev1", 00:07:25.395 "uuid": "18f6863d-c90d-5cda-a2a1-327bb8b1b0eb", 00:07:25.395 "is_configured": true, 00:07:25.395 "data_offset": 2048, 00:07:25.395 "data_size": 63488 00:07:25.395 }, 00:07:25.395 { 00:07:25.395 "name": "BaseBdev2", 00:07:25.395 "uuid": "73bc7986-2cf8-5b4f-96bc-da955ef17536", 00:07:25.395 "is_configured": true, 00:07:25.395 "data_offset": 2048, 00:07:25.395 "data_size": 63488 00:07:25.395 } 00:07:25.395 ] 00:07:25.395 }' 00:07:25.395 14:40:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.395 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.655 [2024-12-09 14:40:03.670886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.655 [2024-12-09 14:40:03.670995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.655 [2024-12-09 14:40:03.673918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.655 [2024-12-09 14:40:03.674017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.655 [2024-12-09 14:40:03.674069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.655 [2024-12-09 14:40:03.674113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:25.655 { 00:07:25.655 "results": [ 00:07:25.655 { 00:07:25.655 "job": "raid_bdev1", 00:07:25.655 "core_mask": "0x1", 00:07:25.655 "workload": "randrw", 00:07:25.655 "percentage": 50, 00:07:25.655 "status": "finished", 00:07:25.655 "queue_depth": 1, 00:07:25.655 "io_size": 131072, 00:07:25.655 "runtime": 1.377695, 00:07:25.655 "iops": 14964.850710788673, 00:07:25.655 "mibps": 1870.606338848584, 00:07:25.655 "io_failed": 1, 00:07:25.655 "io_timeout": 0, 00:07:25.655 "avg_latency_us": 92.67609554715621, 00:07:25.655 "min_latency_us": 27.165065502183406, 00:07:25.655 "max_latency_us": 1652.709170305677 00:07:25.655 } 00:07:25.655 ], 00:07:25.655 "core_count": 1 00:07:25.655 } 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- 
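The results JSON above reports `io_failed: 1` over `runtime: 1.377695` seconds; dividing the two gives the `0.73` failures-per-second figure the test scrapes back out of the bdevperf log a few lines later. Since bash has no floating-point arithmetic, a sketch of that division would use awk, with the two values copied from the JSON:

```shell
# Values from the "results" JSON in the log above.
io_failed=1
runtime=1.377695

# awk handles the float division; %.2f matches the two-decimal
# failures-per-second column bdevperf reports.
fail_per_s=$(awk -v f="$io_failed" -v t="$runtime" 'BEGIN { printf "%.2f", f / t }')
echo "$fail_per_s"   # 0.73
```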
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62706 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62706 ']' 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62706 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62706 00:07:25.655 killing process with pid 62706 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62706' 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62706 00:07:25.655 [2024-12-09 14:40:03.711993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.655 14:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62706 00:07:25.914 [2024-12-09 14:40:03.851859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HEoc5hu9Mj 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- 
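The `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline above pulls the failure rate out of the saved bdevperf log, then `[[ 0.73 != \0\.\0\0 ]]` confirms the injected read errors were actually counted. A self-contained sketch of that scrape; the stand-in log's column layout is invented for illustration (only the pipeline itself mirrors the trace):

```shell
# Hypothetical bdevperf output: a "Job" header line plus a data row
# whose sixth whitespace-separated field is the failures-per-second.
log='Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 : 14964.85 1870.61 1 0.73'

# Drop the header, keep the raid_bdev1 data row, take column six.
fail_per_s=$(printf '%s\n' "$log" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"   # 0.73

# Mirrors bdev_raid.sh@849: the run only passes if errors were seen.
[[ $fail_per_s != "0.00" ]] && echo "injected I/O errors were counted"
```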
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:27.293 00:07:27.293 real 0m4.371s 00:07:27.293 user 0m5.258s 00:07:27.293 sys 0m0.513s 00:07:27.293 ************************************ 00:07:27.293 END TEST raid_read_error_test 00:07:27.293 ************************************ 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.293 14:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.293 14:40:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:27.293 14:40:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:27.293 14:40:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.293 14:40:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.293 ************************************ 00:07:27.293 START TEST raid_write_error_test 00:07:27.293 ************************************ 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.293 14:40:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IkkGqIXcWl 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62852 00:07:27.293 14:40:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62852 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62852 ']' 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.293 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.293 [2024-12-09 14:40:05.219343] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:07:27.293 [2024-12-09 14:40:05.219450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62852 ] 00:07:27.293 [2024-12-09 14:40:05.392383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.552 [2024-12-09 14:40:05.510955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.812 [2024-12-09 14:40:05.725891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.812 [2024-12-09 14:40:05.725968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.075 BaseBdev1_malloc 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.075 true 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.075 [2024-12-09 14:40:06.116822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:28.075 [2024-12-09 14:40:06.116947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.075 [2024-12-09 14:40:06.116974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:28.075 [2024-12-09 14:40:06.116985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.075 [2024-12-09 14:40:06.119404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.075 [2024-12-09 14:40:06.119464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:28.075 BaseBdev1 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.075 BaseBdev2_malloc 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.075 14:40:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.075 true 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.075 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.075 [2024-12-09 14:40:06.185734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:28.075 [2024-12-09 14:40:06.185796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.075 [2024-12-09 14:40:06.185815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:28.076 [2024-12-09 14:40:06.185825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.076 [2024-12-09 14:40:06.188159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.076 [2024-12-09 14:40:06.188288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:28.076 BaseBdev2 00:07:28.076 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.076 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:28.076 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.076 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.347 [2024-12-09 14:40:06.197762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:28.347 [2024-12-09 14:40:06.199842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.347 [2024-12-09 14:40:06.200100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.347 [2024-12-09 14:40:06.200155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.347 [2024-12-09 14:40:06.200448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:28.347 [2024-12-09 14:40:06.200699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.347 [2024-12-09 14:40:06.200753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:28.347 [2024-12-09 14:40:06.200992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.347 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.348 "name": "raid_bdev1", 00:07:28.348 "uuid": "8b26a3dc-b7b2-4d94-abe8-edc99db8b319", 00:07:28.348 "strip_size_kb": 64, 00:07:28.348 "state": "online", 00:07:28.348 "raid_level": "raid0", 00:07:28.348 "superblock": true, 00:07:28.348 "num_base_bdevs": 2, 00:07:28.348 "num_base_bdevs_discovered": 2, 00:07:28.348 "num_base_bdevs_operational": 2, 00:07:28.348 "base_bdevs_list": [ 00:07:28.348 { 00:07:28.348 "name": "BaseBdev1", 00:07:28.348 "uuid": "be1a3ab2-f4ed-5f01-baed-bf1cbe6d0c6c", 00:07:28.348 "is_configured": true, 00:07:28.348 "data_offset": 2048, 00:07:28.348 "data_size": 63488 00:07:28.348 }, 00:07:28.348 { 00:07:28.348 "name": "BaseBdev2", 00:07:28.348 "uuid": "8a323e0b-74da-5e3d-931a-2290fc359ece", 00:07:28.348 "is_configured": true, 00:07:28.348 "data_offset": 2048, 00:07:28.348 "data_size": 63488 00:07:28.348 } 00:07:28.348 ] 00:07:28.348 }' 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.348 14:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.607 14:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:28.607 14:40:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:28.607 [2024-12-09 14:40:06.722245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.545 14:40:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.545 14:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.805 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.805 "name": "raid_bdev1", 00:07:29.805 "uuid": "8b26a3dc-b7b2-4d94-abe8-edc99db8b319", 00:07:29.805 "strip_size_kb": 64, 00:07:29.805 "state": "online", 00:07:29.805 "raid_level": "raid0", 00:07:29.805 "superblock": true, 00:07:29.805 "num_base_bdevs": 2, 00:07:29.805 "num_base_bdevs_discovered": 2, 00:07:29.805 "num_base_bdevs_operational": 2, 00:07:29.805 "base_bdevs_list": [ 00:07:29.805 { 00:07:29.805 "name": "BaseBdev1", 00:07:29.805 "uuid": "be1a3ab2-f4ed-5f01-baed-bf1cbe6d0c6c", 00:07:29.805 "is_configured": true, 00:07:29.805 "data_offset": 2048, 00:07:29.805 "data_size": 63488 00:07:29.805 }, 00:07:29.805 { 00:07:29.805 "name": "BaseBdev2", 00:07:29.805 "uuid": "8a323e0b-74da-5e3d-931a-2290fc359ece", 00:07:29.805 "is_configured": true, 00:07:29.805 "data_offset": 2048, 00:07:29.805 "data_size": 63488 00:07:29.805 } 00:07:29.805 ] 00:07:29.805 }' 00:07:29.805 14:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.805 14:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.065 14:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.065 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.065 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.065 [2024-12-09 14:40:08.078441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.065 [2024-12-09 14:40:08.078553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.065 [2024-12-09 14:40:08.081404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.065 [2024-12-09 14:40:08.081507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.065 [2024-12-09 14:40:08.081567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.066 [2024-12-09 14:40:08.081659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:30.066 { 00:07:30.066 "results": [ 00:07:30.066 { 00:07:30.066 "job": "raid_bdev1", 00:07:30.066 "core_mask": "0x1", 00:07:30.066 "workload": "randrw", 00:07:30.066 "percentage": 50, 00:07:30.066 "status": "finished", 00:07:30.066 "queue_depth": 1, 00:07:30.066 "io_size": 131072, 00:07:30.066 "runtime": 1.357101, 00:07:30.066 "iops": 14995.936190453032, 00:07:30.066 "mibps": 1874.492023806629, 00:07:30.066 "io_failed": 1, 00:07:30.066 "io_timeout": 0, 00:07:30.066 "avg_latency_us": 92.38923676910824, 00:07:30.066 "min_latency_us": 27.053275109170304, 00:07:30.066 "max_latency_us": 1566.8541484716156 00:07:30.066 } 00:07:30.066 ], 00:07:30.066 "core_count": 1 00:07:30.066 } 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62852 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62852 ']' 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62852 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62852 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.066 killing process with pid 62852 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62852' 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62852 00:07:30.066 [2024-12-09 14:40:08.125178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.066 14:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62852 00:07:30.326 [2024-12-09 14:40:08.265979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IkkGqIXcWl 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:31.707 ************************************ 00:07:31.707 END TEST raid_write_error_test 00:07:31.707 ************************************ 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:31.707 00:07:31.707 real 0m4.366s 00:07:31.707 user 0m5.225s 00:07:31.707 sys 0m0.523s 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.707 14:40:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.707 14:40:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:31.707 14:40:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:31.707 14:40:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.707 14:40:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.707 14:40:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.707 ************************************ 00:07:31.707 START TEST raid_state_function_test 00:07:31.707 ************************************ 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62995 00:07:31.707 14:40:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62995' 00:07:31.707 Process raid pid: 62995 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62995 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62995 ']' 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.707 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.707 [2024-12-09 14:40:09.644916] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:07:31.708 [2024-12-09 14:40:09.645136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.708 [2024-12-09 14:40:09.804613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.967 [2024-12-09 14:40:09.927569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.227 [2024-12-09 14:40:10.138200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.227 [2024-12-09 14:40:10.138305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.487 [2024-12-09 14:40:10.487658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.487 [2024-12-09 14:40:10.487721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.487 [2024-12-09 14:40:10.487732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.487 [2024-12-09 14:40:10.487742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.487 14:40:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.487 "name": "Existed_Raid", 00:07:32.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.487 "strip_size_kb": 64, 00:07:32.487 "state": "configuring", 00:07:32.487 
"raid_level": "concat", 00:07:32.487 "superblock": false, 00:07:32.487 "num_base_bdevs": 2, 00:07:32.487 "num_base_bdevs_discovered": 0, 00:07:32.487 "num_base_bdevs_operational": 2, 00:07:32.487 "base_bdevs_list": [ 00:07:32.487 { 00:07:32.487 "name": "BaseBdev1", 00:07:32.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.487 "is_configured": false, 00:07:32.487 "data_offset": 0, 00:07:32.487 "data_size": 0 00:07:32.487 }, 00:07:32.487 { 00:07:32.487 "name": "BaseBdev2", 00:07:32.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.487 "is_configured": false, 00:07:32.487 "data_offset": 0, 00:07:32.487 "data_size": 0 00:07:32.487 } 00:07:32.487 ] 00:07:32.487 }' 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.487 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.056 [2024-12-09 14:40:10.918892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.056 [2024-12-09 14:40:10.918997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:33.056 [2024-12-09 14:40:10.930855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.056 [2024-12-09 14:40:10.930946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.056 [2024-12-09 14:40:10.930999] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.056 [2024-12-09 14:40:10.931042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.056 [2024-12-09 14:40:10.979199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.056 BaseBdev1 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.056 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.056 [ 00:07:33.056 { 00:07:33.056 "name": "BaseBdev1", 00:07:33.056 "aliases": [ 00:07:33.056 "376ae952-9a10-476e-9190-fecd8f61e14e" 00:07:33.056 ], 00:07:33.056 "product_name": "Malloc disk", 00:07:33.056 "block_size": 512, 00:07:33.056 "num_blocks": 65536, 00:07:33.056 "uuid": "376ae952-9a10-476e-9190-fecd8f61e14e", 00:07:33.056 "assigned_rate_limits": { 00:07:33.056 "rw_ios_per_sec": 0, 00:07:33.056 "rw_mbytes_per_sec": 0, 00:07:33.056 "r_mbytes_per_sec": 0, 00:07:33.056 "w_mbytes_per_sec": 0 00:07:33.056 }, 00:07:33.056 "claimed": true, 00:07:33.056 "claim_type": "exclusive_write", 00:07:33.056 "zoned": false, 00:07:33.056 "supported_io_types": { 00:07:33.056 "read": true, 00:07:33.056 "write": true, 00:07:33.056 "unmap": true, 00:07:33.056 "flush": true, 00:07:33.056 "reset": true, 00:07:33.056 "nvme_admin": false, 00:07:33.056 "nvme_io": false, 00:07:33.056 "nvme_io_md": false, 00:07:33.056 "write_zeroes": true, 00:07:33.056 "zcopy": true, 00:07:33.056 "get_zone_info": false, 00:07:33.056 "zone_management": false, 00:07:33.056 "zone_append": false, 00:07:33.056 "compare": false, 00:07:33.056 "compare_and_write": false, 00:07:33.056 "abort": true, 00:07:33.056 "seek_hole": false, 00:07:33.056 "seek_data": false, 00:07:33.056 "copy": true, 00:07:33.056 "nvme_iov_md": 
false 00:07:33.056 }, 00:07:33.056 "memory_domains": [ 00:07:33.056 { 00:07:33.056 "dma_device_id": "system", 00:07:33.056 "dma_device_type": 1 00:07:33.056 }, 00:07:33.056 { 00:07:33.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.056 "dma_device_type": 2 00:07:33.056 } 00:07:33.056 ], 00:07:33.056 "driver_specific": {} 00:07:33.056 } 00:07:33.056 ] 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.056 
14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.056 "name": "Existed_Raid", 00:07:33.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.056 "strip_size_kb": 64, 00:07:33.056 "state": "configuring", 00:07:33.056 "raid_level": "concat", 00:07:33.056 "superblock": false, 00:07:33.056 "num_base_bdevs": 2, 00:07:33.056 "num_base_bdevs_discovered": 1, 00:07:33.056 "num_base_bdevs_operational": 2, 00:07:33.056 "base_bdevs_list": [ 00:07:33.056 { 00:07:33.056 "name": "BaseBdev1", 00:07:33.056 "uuid": "376ae952-9a10-476e-9190-fecd8f61e14e", 00:07:33.056 "is_configured": true, 00:07:33.056 "data_offset": 0, 00:07:33.056 "data_size": 65536 00:07:33.056 }, 00:07:33.056 { 00:07:33.056 "name": "BaseBdev2", 00:07:33.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.056 "is_configured": false, 00:07:33.056 "data_offset": 0, 00:07:33.056 "data_size": 0 00:07:33.056 } 00:07:33.056 ] 00:07:33.056 }' 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.056 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.625 [2024-12-09 14:40:11.478421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.625 [2024-12-09 14:40:11.478484] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.625 [2024-12-09 14:40:11.490444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.625 [2024-12-09 14:40:11.492395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.625 [2024-12-09 14:40:11.492480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.625 "name": "Existed_Raid", 00:07:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.625 "strip_size_kb": 64, 00:07:33.625 "state": "configuring", 00:07:33.625 "raid_level": "concat", 00:07:33.625 "superblock": false, 00:07:33.625 "num_base_bdevs": 2, 00:07:33.625 "num_base_bdevs_discovered": 1, 00:07:33.625 "num_base_bdevs_operational": 2, 00:07:33.625 "base_bdevs_list": [ 00:07:33.625 { 00:07:33.625 "name": "BaseBdev1", 00:07:33.625 "uuid": "376ae952-9a10-476e-9190-fecd8f61e14e", 00:07:33.625 "is_configured": true, 00:07:33.625 "data_offset": 0, 00:07:33.625 "data_size": 65536 00:07:33.625 }, 00:07:33.625 { 00:07:33.625 "name": "BaseBdev2", 00:07:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.625 "is_configured": false, 00:07:33.625 "data_offset": 0, 00:07:33.625 "data_size": 0 00:07:33.625 } 
00:07:33.625 ] 00:07:33.625 }' 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.625 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.885 [2024-12-09 14:40:11.942138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.885 [2024-12-09 14:40:11.942279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.885 [2024-12-09 14:40:11.942293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:33.885 [2024-12-09 14:40:11.942636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.885 [2024-12-09 14:40:11.942827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.885 [2024-12-09 14:40:11.942842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:33.885 [2024-12-09 14:40:11.943140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.885 BaseBdev2 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.885 14:40:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.885 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.885 [ 00:07:33.885 { 00:07:33.885 "name": "BaseBdev2", 00:07:33.885 "aliases": [ 00:07:33.885 "f899dbce-b4f5-4918-b88e-b43edab5d6fc" 00:07:33.885 ], 00:07:33.885 "product_name": "Malloc disk", 00:07:33.885 "block_size": 512, 00:07:33.885 "num_blocks": 65536, 00:07:33.885 "uuid": "f899dbce-b4f5-4918-b88e-b43edab5d6fc", 00:07:33.885 "assigned_rate_limits": { 00:07:33.885 "rw_ios_per_sec": 0, 00:07:33.885 "rw_mbytes_per_sec": 0, 00:07:33.885 "r_mbytes_per_sec": 0, 00:07:33.885 "w_mbytes_per_sec": 0 00:07:33.885 }, 00:07:33.885 "claimed": true, 00:07:33.885 "claim_type": "exclusive_write", 00:07:33.885 "zoned": false, 00:07:33.885 "supported_io_types": { 00:07:33.885 "read": true, 00:07:33.885 "write": true, 00:07:33.885 "unmap": true, 00:07:33.885 "flush": true, 00:07:33.885 "reset": true, 00:07:33.885 "nvme_admin": false, 00:07:33.885 "nvme_io": false, 00:07:33.885 "nvme_io_md": 
false, 00:07:33.885 "write_zeroes": true, 00:07:33.885 "zcopy": true, 00:07:33.885 "get_zone_info": false, 00:07:33.885 "zone_management": false, 00:07:33.885 "zone_append": false, 00:07:33.885 "compare": false, 00:07:33.885 "compare_and_write": false, 00:07:33.885 "abort": true, 00:07:33.886 "seek_hole": false, 00:07:33.886 "seek_data": false, 00:07:33.886 "copy": true, 00:07:33.886 "nvme_iov_md": false 00:07:33.886 }, 00:07:33.886 "memory_domains": [ 00:07:33.886 { 00:07:33.886 "dma_device_id": "system", 00:07:33.886 "dma_device_type": 1 00:07:33.886 }, 00:07:33.886 { 00:07:33.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.886 "dma_device_type": 2 00:07:33.886 } 00:07:33.886 ], 00:07:33.886 "driver_specific": {} 00:07:33.886 } 00:07:33.886 ] 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.886 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.145 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.145 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.145 "name": "Existed_Raid", 00:07:34.145 "uuid": "2894f4f3-e3e1-46bf-9853-be475a7f42dd", 00:07:34.145 "strip_size_kb": 64, 00:07:34.145 "state": "online", 00:07:34.145 "raid_level": "concat", 00:07:34.145 "superblock": false, 00:07:34.145 "num_base_bdevs": 2, 00:07:34.145 "num_base_bdevs_discovered": 2, 00:07:34.145 "num_base_bdevs_operational": 2, 00:07:34.145 "base_bdevs_list": [ 00:07:34.145 { 00:07:34.145 "name": "BaseBdev1", 00:07:34.145 "uuid": "376ae952-9a10-476e-9190-fecd8f61e14e", 00:07:34.145 "is_configured": true, 00:07:34.145 "data_offset": 0, 00:07:34.145 "data_size": 65536 00:07:34.145 }, 00:07:34.145 { 00:07:34.145 "name": "BaseBdev2", 00:07:34.145 "uuid": "f899dbce-b4f5-4918-b88e-b43edab5d6fc", 00:07:34.145 "is_configured": true, 00:07:34.145 "data_offset": 0, 00:07:34.145 "data_size": 65536 00:07:34.145 } 00:07:34.145 ] 00:07:34.145 }' 00:07:34.145 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:34.146 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.405 [2024-12-09 14:40:12.421796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.405 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.405 "name": "Existed_Raid", 00:07:34.405 "aliases": [ 00:07:34.405 "2894f4f3-e3e1-46bf-9853-be475a7f42dd" 00:07:34.405 ], 00:07:34.405 "product_name": "Raid Volume", 00:07:34.405 "block_size": 512, 00:07:34.405 "num_blocks": 131072, 00:07:34.405 "uuid": "2894f4f3-e3e1-46bf-9853-be475a7f42dd", 00:07:34.405 "assigned_rate_limits": { 00:07:34.405 "rw_ios_per_sec": 0, 00:07:34.405 "rw_mbytes_per_sec": 0, 00:07:34.405 "r_mbytes_per_sec": 
0, 00:07:34.405 "w_mbytes_per_sec": 0 00:07:34.405 }, 00:07:34.405 "claimed": false, 00:07:34.405 "zoned": false, 00:07:34.405 "supported_io_types": { 00:07:34.405 "read": true, 00:07:34.405 "write": true, 00:07:34.405 "unmap": true, 00:07:34.405 "flush": true, 00:07:34.405 "reset": true, 00:07:34.405 "nvme_admin": false, 00:07:34.405 "nvme_io": false, 00:07:34.405 "nvme_io_md": false, 00:07:34.405 "write_zeroes": true, 00:07:34.405 "zcopy": false, 00:07:34.405 "get_zone_info": false, 00:07:34.405 "zone_management": false, 00:07:34.405 "zone_append": false, 00:07:34.405 "compare": false, 00:07:34.405 "compare_and_write": false, 00:07:34.405 "abort": false, 00:07:34.405 "seek_hole": false, 00:07:34.405 "seek_data": false, 00:07:34.405 "copy": false, 00:07:34.405 "nvme_iov_md": false 00:07:34.405 }, 00:07:34.405 "memory_domains": [ 00:07:34.405 { 00:07:34.406 "dma_device_id": "system", 00:07:34.406 "dma_device_type": 1 00:07:34.406 }, 00:07:34.406 { 00:07:34.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.406 "dma_device_type": 2 00:07:34.406 }, 00:07:34.406 { 00:07:34.406 "dma_device_id": "system", 00:07:34.406 "dma_device_type": 1 00:07:34.406 }, 00:07:34.406 { 00:07:34.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.406 "dma_device_type": 2 00:07:34.406 } 00:07:34.406 ], 00:07:34.406 "driver_specific": { 00:07:34.406 "raid": { 00:07:34.406 "uuid": "2894f4f3-e3e1-46bf-9853-be475a7f42dd", 00:07:34.406 "strip_size_kb": 64, 00:07:34.406 "state": "online", 00:07:34.406 "raid_level": "concat", 00:07:34.406 "superblock": false, 00:07:34.406 "num_base_bdevs": 2, 00:07:34.406 "num_base_bdevs_discovered": 2, 00:07:34.406 "num_base_bdevs_operational": 2, 00:07:34.406 "base_bdevs_list": [ 00:07:34.406 { 00:07:34.406 "name": "BaseBdev1", 00:07:34.406 "uuid": "376ae952-9a10-476e-9190-fecd8f61e14e", 00:07:34.406 "is_configured": true, 00:07:34.406 "data_offset": 0, 00:07:34.406 "data_size": 65536 00:07:34.406 }, 00:07:34.406 { 00:07:34.406 "name": "BaseBdev2", 
00:07:34.406 "uuid": "f899dbce-b4f5-4918-b88e-b43edab5d6fc", 00:07:34.406 "is_configured": true, 00:07:34.406 "data_offset": 0, 00:07:34.406 "data_size": 65536 00:07:34.406 } 00:07:34.406 ] 00:07:34.406 } 00:07:34.406 } 00:07:34.406 }' 00:07:34.406 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.406 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.406 BaseBdev2' 00:07:34.406 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.719 [2024-12-09 14:40:12.645244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.719 [2024-12-09 14:40:12.645281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.719 [2024-12-09 14:40:12.645351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.719 "name": "Existed_Raid", 00:07:34.719 "uuid": "2894f4f3-e3e1-46bf-9853-be475a7f42dd", 00:07:34.719 "strip_size_kb": 64, 00:07:34.719 
"state": "offline", 00:07:34.719 "raid_level": "concat", 00:07:34.719 "superblock": false, 00:07:34.719 "num_base_bdevs": 2, 00:07:34.719 "num_base_bdevs_discovered": 1, 00:07:34.719 "num_base_bdevs_operational": 1, 00:07:34.719 "base_bdevs_list": [ 00:07:34.719 { 00:07:34.719 "name": null, 00:07:34.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.719 "is_configured": false, 00:07:34.719 "data_offset": 0, 00:07:34.719 "data_size": 65536 00:07:34.719 }, 00:07:34.719 { 00:07:34.719 "name": "BaseBdev2", 00:07:34.719 "uuid": "f899dbce-b4f5-4918-b88e-b43edab5d6fc", 00:07:34.719 "is_configured": true, 00:07:34.719 "data_offset": 0, 00:07:34.719 "data_size": 65536 00:07:34.719 } 00:07:34.719 ] 00:07:34.719 }' 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.719 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.304 [2024-12-09 14:40:13.252351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.304 [2024-12-09 14:40:13.252411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62995 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62995 ']' 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62995 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.304 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62995 00:07:35.564 killing process with pid 62995 00:07:35.564 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.564 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.564 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62995' 00:07:35.564 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62995 00:07:35.564 [2024-12-09 14:40:13.448578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.564 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62995 00:07:35.564 [2024-12-09 14:40:13.465659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.503 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.503 00:07:36.503 real 0m5.057s 00:07:36.503 user 0m7.276s 00:07:36.503 sys 0m0.816s 00:07:36.503 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.503 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.503 ************************************ 00:07:36.503 END TEST raid_state_function_test 00:07:36.503 ************************************ 00:07:36.762 14:40:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:36.762 14:40:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:36.763 14:40:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.763 14:40:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 ************************************ 00:07:36.763 START TEST raid_state_function_test_sb 00:07:36.763 ************************************ 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63243 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63243' 00:07:36.763 Process raid pid: 63243 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63243 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63243 ']' 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.763 14:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 [2024-12-09 14:40:14.770184] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:36.763 [2024-12-09 14:40:14.770384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.022 [2024-12-09 14:40:14.948928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.022 [2024-12-09 14:40:15.072863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.281 [2024-12-09 14:40:15.284183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.281 [2024-12-09 14:40:15.284279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.540 [2024-12-09 14:40:15.628927] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:37.540 [2024-12-09 14:40:15.628989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.540 [2024-12-09 14:40:15.629005] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.540 [2024-12-09 14:40:15.629016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.540 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.541 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:37.541 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.541 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.541 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.800 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.800 "name": "Existed_Raid", 00:07:37.800 "uuid": "4f34dd88-f624-4b89-88a9-dc24d288ae30", 00:07:37.800 "strip_size_kb": 64, 00:07:37.800 "state": "configuring", 00:07:37.800 "raid_level": "concat", 00:07:37.800 "superblock": true, 00:07:37.800 "num_base_bdevs": 2, 00:07:37.800 "num_base_bdevs_discovered": 0, 00:07:37.800 "num_base_bdevs_operational": 2, 00:07:37.800 "base_bdevs_list": [ 00:07:37.800 { 00:07:37.800 "name": "BaseBdev1", 00:07:37.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.800 "is_configured": false, 00:07:37.800 "data_offset": 0, 00:07:37.800 "data_size": 0 00:07:37.800 }, 00:07:37.800 { 00:07:37.800 "name": "BaseBdev2", 00:07:37.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.800 "is_configured": false, 00:07:37.800 "data_offset": 0, 00:07:37.800 "data_size": 0 00:07:37.800 } 00:07:37.800 ] 00:07:37.800 }' 00:07:37.800 14:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.800 14:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.060 [2024-12-09 14:40:16.048163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.060 
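The `raid_bdev_info` blob dumped just above (state `"configuring"`, zero discovered base bdevs) is what `verify_raid_bdev_state` picks apart with `jq`. The same per-field checks can be sketched in Python against the JSON exactly as it appears in this log (the `verify_state` helper name is illustrative, not part of the test suite):

```python
import json

# raid_bdev_info as dumped by `rpc_cmd bdev_raid_get_bdevs all` above
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "uuid": "4f34dd88-f624-4b89-88a9-dc24d288ae30",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev2", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Mirror of the shell test's per-field assertions on the RPC output.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # discovered must equal the number of base bdevs marked is_configured
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert info["num_base_bdevs_discovered"] == configured

verify_state(raid_bdev_info, "configuring", "concat", 64, 2)
```

Since neither `BaseBdev1` nor `BaseBdev2` exists yet, `is_configured` is false for both entries and `num_base_bdevs_discovered` is 0, which is exactly what keeps the array in the `configuring` state.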
[2024-12-09 14:40:16.048295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.060 [2024-12-09 14:40:16.060150] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.060 [2024-12-09 14:40:16.060201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.060 [2024-12-09 14:40:16.060212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.060 [2024-12-09 14:40:16.060226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.060 [2024-12-09 14:40:16.109351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.060 BaseBdev1 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.060 [ 00:07:38.060 { 00:07:38.060 "name": "BaseBdev1", 00:07:38.060 "aliases": [ 00:07:38.060 "fa5d116b-9e7a-41a3-a01a-fd94bcc7d96e" 00:07:38.060 ], 00:07:38.060 "product_name": "Malloc disk", 00:07:38.060 "block_size": 512, 00:07:38.060 "num_blocks": 65536, 00:07:38.060 "uuid": "fa5d116b-9e7a-41a3-a01a-fd94bcc7d96e", 00:07:38.060 "assigned_rate_limits": { 00:07:38.060 "rw_ios_per_sec": 0, 00:07:38.060 "rw_mbytes_per_sec": 0, 00:07:38.060 "r_mbytes_per_sec": 0, 00:07:38.060 "w_mbytes_per_sec": 0 00:07:38.060 }, 00:07:38.060 "claimed": true, 00:07:38.060 "claim_type": 
"exclusive_write", 00:07:38.060 "zoned": false, 00:07:38.060 "supported_io_types": { 00:07:38.060 "read": true, 00:07:38.060 "write": true, 00:07:38.060 "unmap": true, 00:07:38.060 "flush": true, 00:07:38.060 "reset": true, 00:07:38.060 "nvme_admin": false, 00:07:38.060 "nvme_io": false, 00:07:38.060 "nvme_io_md": false, 00:07:38.060 "write_zeroes": true, 00:07:38.060 "zcopy": true, 00:07:38.060 "get_zone_info": false, 00:07:38.060 "zone_management": false, 00:07:38.060 "zone_append": false, 00:07:38.060 "compare": false, 00:07:38.060 "compare_and_write": false, 00:07:38.060 "abort": true, 00:07:38.060 "seek_hole": false, 00:07:38.060 "seek_data": false, 00:07:38.060 "copy": true, 00:07:38.060 "nvme_iov_md": false 00:07:38.060 }, 00:07:38.060 "memory_domains": [ 00:07:38.060 { 00:07:38.060 "dma_device_id": "system", 00:07:38.060 "dma_device_type": 1 00:07:38.060 }, 00:07:38.060 { 00:07:38.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.060 "dma_device_type": 2 00:07:38.060 } 00:07:38.060 ], 00:07:38.060 "driver_specific": {} 00:07:38.060 } 00:07:38.060 ] 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.060 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.320 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.320 "name": "Existed_Raid", 00:07:38.320 "uuid": "9a048c7d-4a98-4415-a802-440772be9b6c", 00:07:38.320 "strip_size_kb": 64, 00:07:38.320 "state": "configuring", 00:07:38.320 "raid_level": "concat", 00:07:38.320 "superblock": true, 00:07:38.320 "num_base_bdevs": 2, 00:07:38.320 "num_base_bdevs_discovered": 1, 00:07:38.320 "num_base_bdevs_operational": 2, 00:07:38.320 "base_bdevs_list": [ 00:07:38.320 { 00:07:38.320 "name": "BaseBdev1", 00:07:38.320 "uuid": "fa5d116b-9e7a-41a3-a01a-fd94bcc7d96e", 00:07:38.320 "is_configured": true, 00:07:38.320 "data_offset": 2048, 00:07:38.320 "data_size": 63488 00:07:38.320 }, 00:07:38.320 { 00:07:38.320 "name": "BaseBdev2", 00:07:38.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.320 "is_configured": false, 00:07:38.320 
"data_offset": 0, 00:07:38.320 "data_size": 0 00:07:38.320 } 00:07:38.320 ] 00:07:38.320 }' 00:07:38.320 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.320 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.580 [2024-12-09 14:40:16.556669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.580 [2024-12-09 14:40:16.556725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.580 [2024-12-09 14:40:16.568733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.580 [2024-12-09 14:40:16.570792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.580 [2024-12-09 14:40:16.570838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.580 "name": "Existed_Raid", 00:07:38.580 "uuid": "d4d7b8f0-8610-4a7e-a817-75e8df62204d", 00:07:38.580 "strip_size_kb": 64, 00:07:38.580 "state": "configuring", 00:07:38.580 "raid_level": "concat", 00:07:38.580 "superblock": true, 00:07:38.580 "num_base_bdevs": 2, 00:07:38.580 "num_base_bdevs_discovered": 1, 00:07:38.580 "num_base_bdevs_operational": 2, 00:07:38.580 "base_bdevs_list": [ 00:07:38.580 { 00:07:38.580 "name": "BaseBdev1", 00:07:38.580 "uuid": "fa5d116b-9e7a-41a3-a01a-fd94bcc7d96e", 00:07:38.580 "is_configured": true, 00:07:38.580 "data_offset": 2048, 00:07:38.580 "data_size": 63488 00:07:38.580 }, 00:07:38.580 { 00:07:38.580 "name": "BaseBdev2", 00:07:38.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.580 "is_configured": false, 00:07:38.580 "data_offset": 0, 00:07:38.580 "data_size": 0 00:07:38.580 } 00:07:38.580 ] 00:07:38.580 }' 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.580 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.839 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:38.839 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.839 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.100 [2024-12-09 14:40:16.993209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.100 [2024-12-09 14:40:16.993555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.100 [2024-12-09 14:40:16.993632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.100 [2024-12-09 14:40:16.993915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 
00:07:39.100 [2024-12-09 14:40:16.994168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.100 [2024-12-09 14:40:16.994221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:39.100 BaseBdev2 00:07:39.100 [2024-12-09 14:40:16.994426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.100 14:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:39.100 [ 00:07:39.100 { 00:07:39.100 "name": "BaseBdev2", 00:07:39.100 "aliases": [ 00:07:39.100 "3be682e7-4a4f-4f9c-83bd-8a484ab98f88" 00:07:39.100 ], 00:07:39.100 "product_name": "Malloc disk", 00:07:39.100 "block_size": 512, 00:07:39.100 "num_blocks": 65536, 00:07:39.100 "uuid": "3be682e7-4a4f-4f9c-83bd-8a484ab98f88", 00:07:39.100 "assigned_rate_limits": { 00:07:39.100 "rw_ios_per_sec": 0, 00:07:39.100 "rw_mbytes_per_sec": 0, 00:07:39.100 "r_mbytes_per_sec": 0, 00:07:39.100 "w_mbytes_per_sec": 0 00:07:39.100 }, 00:07:39.100 "claimed": true, 00:07:39.100 "claim_type": "exclusive_write", 00:07:39.100 "zoned": false, 00:07:39.100 "supported_io_types": { 00:07:39.100 "read": true, 00:07:39.100 "write": true, 00:07:39.100 "unmap": true, 00:07:39.100 "flush": true, 00:07:39.100 "reset": true, 00:07:39.100 "nvme_admin": false, 00:07:39.100 "nvme_io": false, 00:07:39.100 "nvme_io_md": false, 00:07:39.100 "write_zeroes": true, 00:07:39.100 "zcopy": true, 00:07:39.100 "get_zone_info": false, 00:07:39.100 "zone_management": false, 00:07:39.100 "zone_append": false, 00:07:39.100 "compare": false, 00:07:39.100 "compare_and_write": false, 00:07:39.100 "abort": true, 00:07:39.100 "seek_hole": false, 00:07:39.100 "seek_data": false, 00:07:39.100 "copy": true, 00:07:39.100 "nvme_iov_md": false 00:07:39.100 }, 00:07:39.100 "memory_domains": [ 00:07:39.100 { 00:07:39.100 "dma_device_id": "system", 00:07:39.100 "dma_device_type": 1 00:07:39.100 }, 00:07:39.100 { 00:07:39.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.100 "dma_device_type": 2 00:07:39.100 } 00:07:39.100 ], 00:07:39.100 "driver_specific": {} 00:07:39.100 } 00:07:39.100 ] 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- 
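With both Malloc base bdevs claimed, the `Raid Volume` descriptor dumped by `bdev_get_bdevs -b Existed_Raid` advertises a narrower `supported_io_types` set than the base bdevs themselves: per-bdev capabilities that the raid layer does not pass through are cleared. A sketch diffing the two dictionaries as they appear in this log:

```python
# supported_io_types as reported for a Malloc base bdev (BaseBdev1/BaseBdev2)
base = {
    "read": True, "write": True, "unmap": True, "flush": True, "reset": True,
    "nvme_admin": False, "nvme_io": False, "nvme_io_md": False,
    "write_zeroes": True, "zcopy": True, "get_zone_info": False,
    "zone_management": False, "zone_append": False, "compare": False,
    "compare_and_write": False, "abort": True, "seek_hole": False,
    "seek_data": False, "copy": True, "nvme_iov_md": False,
}

# supported_io_types as reported for the Existed_Raid Raid Volume
raid = dict(base, zcopy=False, abort=False, copy=False)

# Capabilities the raid volume drops relative to its base bdevs
dropped = sorted(k for k in base if base[k] and not raid[k])
print(dropped)  # ['abort', 'copy', 'zcopy']
```

Both dictionaries are copied verbatim from the descriptors in this log; the diff shows that only `abort`, `copy`, and `zcopy` are lost at the raid layer, while read/write/unmap/flush/reset pass through.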
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.100 "name": "Existed_Raid", 00:07:39.100 "uuid": "d4d7b8f0-8610-4a7e-a817-75e8df62204d", 00:07:39.100 "strip_size_kb": 64, 00:07:39.100 "state": "online", 00:07:39.100 "raid_level": "concat", 00:07:39.100 "superblock": true, 00:07:39.100 "num_base_bdevs": 2, 00:07:39.100 "num_base_bdevs_discovered": 2, 00:07:39.100 "num_base_bdevs_operational": 2, 00:07:39.100 "base_bdevs_list": [ 00:07:39.100 { 00:07:39.100 "name": "BaseBdev1", 00:07:39.100 "uuid": "fa5d116b-9e7a-41a3-a01a-fd94bcc7d96e", 00:07:39.100 "is_configured": true, 00:07:39.100 "data_offset": 2048, 00:07:39.100 "data_size": 63488 00:07:39.100 }, 00:07:39.100 { 00:07:39.100 "name": "BaseBdev2", 00:07:39.100 "uuid": "3be682e7-4a4f-4f9c-83bd-8a484ab98f88", 00:07:39.100 "is_configured": true, 00:07:39.100 "data_offset": 2048, 00:07:39.100 "data_size": 63488 00:07:39.100 } 00:07:39.100 ] 00:07:39.100 }' 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.100 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:39.670 14:40:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.670 [2024-12-09 14:40:17.504729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.670 "name": "Existed_Raid", 00:07:39.670 "aliases": [ 00:07:39.670 "d4d7b8f0-8610-4a7e-a817-75e8df62204d" 00:07:39.670 ], 00:07:39.670 "product_name": "Raid Volume", 00:07:39.670 "block_size": 512, 00:07:39.670 "num_blocks": 126976, 00:07:39.670 "uuid": "d4d7b8f0-8610-4a7e-a817-75e8df62204d", 00:07:39.670 "assigned_rate_limits": { 00:07:39.670 "rw_ios_per_sec": 0, 00:07:39.670 "rw_mbytes_per_sec": 0, 00:07:39.670 "r_mbytes_per_sec": 0, 00:07:39.670 "w_mbytes_per_sec": 0 00:07:39.670 }, 00:07:39.670 "claimed": false, 00:07:39.670 "zoned": false, 00:07:39.670 "supported_io_types": { 00:07:39.670 "read": true, 00:07:39.670 "write": true, 00:07:39.670 "unmap": true, 00:07:39.670 "flush": true, 00:07:39.670 "reset": true, 00:07:39.670 "nvme_admin": false, 00:07:39.670 "nvme_io": false, 00:07:39.670 "nvme_io_md": false, 00:07:39.670 "write_zeroes": true, 00:07:39.670 "zcopy": false, 00:07:39.670 "get_zone_info": false, 00:07:39.670 "zone_management": false, 00:07:39.670 "zone_append": false, 00:07:39.670 "compare": false, 00:07:39.670 "compare_and_write": false, 00:07:39.670 "abort": false, 00:07:39.670 "seek_hole": false, 00:07:39.670 "seek_data": false, 00:07:39.670 "copy": false, 00:07:39.670 "nvme_iov_md": false 00:07:39.670 }, 00:07:39.670 "memory_domains": [ 00:07:39.670 { 00:07:39.670 "dma_device_id": "system", 00:07:39.670 
"dma_device_type": 1 00:07:39.670 }, 00:07:39.670 { 00:07:39.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.670 "dma_device_type": 2 00:07:39.670 }, 00:07:39.670 { 00:07:39.670 "dma_device_id": "system", 00:07:39.670 "dma_device_type": 1 00:07:39.670 }, 00:07:39.670 { 00:07:39.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.670 "dma_device_type": 2 00:07:39.670 } 00:07:39.670 ], 00:07:39.670 "driver_specific": { 00:07:39.670 "raid": { 00:07:39.670 "uuid": "d4d7b8f0-8610-4a7e-a817-75e8df62204d", 00:07:39.670 "strip_size_kb": 64, 00:07:39.670 "state": "online", 00:07:39.670 "raid_level": "concat", 00:07:39.670 "superblock": true, 00:07:39.670 "num_base_bdevs": 2, 00:07:39.670 "num_base_bdevs_discovered": 2, 00:07:39.670 "num_base_bdevs_operational": 2, 00:07:39.670 "base_bdevs_list": [ 00:07:39.670 { 00:07:39.670 "name": "BaseBdev1", 00:07:39.670 "uuid": "fa5d116b-9e7a-41a3-a01a-fd94bcc7d96e", 00:07:39.670 "is_configured": true, 00:07:39.670 "data_offset": 2048, 00:07:39.670 "data_size": 63488 00:07:39.670 }, 00:07:39.670 { 00:07:39.670 "name": "BaseBdev2", 00:07:39.670 "uuid": "3be682e7-4a4f-4f9c-83bd-8a484ab98f88", 00:07:39.670 "is_configured": true, 00:07:39.670 "data_offset": 2048, 00:07:39.670 "data_size": 63488 00:07:39.670 } 00:07:39.670 ] 00:07:39.670 } 00:07:39.670 } 00:07:39.670 }' 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:39.670 BaseBdev2' 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.670 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.671 14:40:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.671 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.671 [2024-12-09 14:40:17.736033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.671 [2024-12-09 14:40:17.736069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.671 [2024-12-09 14:40:17.736122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.930 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.931 "name": "Existed_Raid", 00:07:39.931 "uuid": "d4d7b8f0-8610-4a7e-a817-75e8df62204d", 00:07:39.931 "strip_size_kb": 64, 00:07:39.931 "state": "offline", 00:07:39.931 "raid_level": "concat", 00:07:39.931 "superblock": true, 00:07:39.931 "num_base_bdevs": 2, 00:07:39.931 "num_base_bdevs_discovered": 1, 00:07:39.931 "num_base_bdevs_operational": 1, 00:07:39.931 "base_bdevs_list": [ 00:07:39.931 { 00:07:39.931 "name": null, 00:07:39.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.931 "is_configured": false, 00:07:39.931 "data_offset": 0, 00:07:39.931 "data_size": 63488 00:07:39.931 }, 00:07:39.931 { 00:07:39.931 "name": "BaseBdev2", 00:07:39.931 "uuid": "3be682e7-4a4f-4f9c-83bd-8a484ab98f88", 00:07:39.931 "is_configured": true, 00:07:39.931 "data_offset": 2048, 00:07:39.931 "data_size": 63488 00:07:39.931 } 00:07:39.931 ] 00:07:39.931 }' 00:07:39.931 14:40:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.931 14:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.190 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:40.190 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.190 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.190 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.190 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.450 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.450 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.451 [2024-12-09 14:40:18.363175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.451 [2024-12-09 14:40:18.363255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63243 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63243 ']' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63243 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63243 00:07:40.451 killing process with pid 63243 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63243' 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63243 00:07:40.451 [2024-12-09 14:40:18.555069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.451 14:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63243 00:07:40.710 [2024-12-09 14:40:18.573015] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.694 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:41.694 00:07:41.694 real 0m5.030s 00:07:41.694 user 0m7.231s 00:07:41.694 sys 0m0.807s 00:07:41.694 ************************************ 00:07:41.694 END TEST raid_state_function_test_sb 00:07:41.694 ************************************ 00:07:41.694 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.694 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.694 14:40:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:41.694 14:40:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:41.694 14:40:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.694 14:40:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.694 ************************************ 00:07:41.694 START TEST raid_superblock_test 00:07:41.694 ************************************ 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63495 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63495 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63495 ']' 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.694 14:40:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.953 [2024-12-09 14:40:19.864381] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:41.953 [2024-12-09 14:40:19.864589] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63495 ] 00:07:41.953 [2024-12-09 14:40:20.019989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.213 [2024-12-09 14:40:20.133282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.472 [2024-12-09 14:40:20.337026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.472 [2024-12-09 14:40:20.337075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.731 14:40:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.731 malloc1 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.731 [2024-12-09 14:40:20.751570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:42.731 [2024-12-09 14:40:20.751691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.731 [2024-12-09 14:40:20.751733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:42.731 [2024-12-09 14:40:20.751764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.731 
[2024-12-09 14:40:20.753871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.731 [2024-12-09 14:40:20.753943] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:42.731 pt1 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.731 malloc2 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.731 14:40:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.731 [2024-12-09 14:40:20.806921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:42.731 [2024-12-09 14:40:20.807028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.731 [2024-12-09 14:40:20.807075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:42.731 [2024-12-09 14:40:20.807105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.731 [2024-12-09 14:40:20.809146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.731 [2024-12-09 14:40:20.809217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:42.731 pt2 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.731 [2024-12-09 14:40:20.818962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:42.731 [2024-12-09 14:40:20.820754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:42.731 [2024-12-09 14:40:20.820950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:42.731 [2024-12-09 14:40:20.820998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.731 
[2024-12-09 14:40:20.821279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.731 [2024-12-09 14:40:20.821469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:42.731 [2024-12-09 14:40:20.821513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:42.731 [2024-12-09 14:40:20.821722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.731 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.732 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.732 14:40:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.732 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.732 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.990 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.990 "name": "raid_bdev1", 00:07:42.990 "uuid": "9bb67285-6535-48f4-93e2-537627db01f8", 00:07:42.990 "strip_size_kb": 64, 00:07:42.990 "state": "online", 00:07:42.990 "raid_level": "concat", 00:07:42.990 "superblock": true, 00:07:42.990 "num_base_bdevs": 2, 00:07:42.990 "num_base_bdevs_discovered": 2, 00:07:42.990 "num_base_bdevs_operational": 2, 00:07:42.990 "base_bdevs_list": [ 00:07:42.990 { 00:07:42.990 "name": "pt1", 00:07:42.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.990 "is_configured": true, 00:07:42.990 "data_offset": 2048, 00:07:42.990 "data_size": 63488 00:07:42.990 }, 00:07:42.990 { 00:07:42.990 "name": "pt2", 00:07:42.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.990 "is_configured": true, 00:07:42.990 "data_offset": 2048, 00:07:42.990 "data_size": 63488 00:07:42.990 } 00:07:42.990 ] 00:07:42.990 }' 00:07:42.990 14:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.990 14:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.249 
14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.249 [2024-12-09 14:40:21.298453] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.249 "name": "raid_bdev1", 00:07:43.249 "aliases": [ 00:07:43.249 "9bb67285-6535-48f4-93e2-537627db01f8" 00:07:43.249 ], 00:07:43.249 "product_name": "Raid Volume", 00:07:43.249 "block_size": 512, 00:07:43.249 "num_blocks": 126976, 00:07:43.249 "uuid": "9bb67285-6535-48f4-93e2-537627db01f8", 00:07:43.249 "assigned_rate_limits": { 00:07:43.249 "rw_ios_per_sec": 0, 00:07:43.249 "rw_mbytes_per_sec": 0, 00:07:43.249 "r_mbytes_per_sec": 0, 00:07:43.249 "w_mbytes_per_sec": 0 00:07:43.249 }, 00:07:43.249 "claimed": false, 00:07:43.249 "zoned": false, 00:07:43.249 "supported_io_types": { 00:07:43.249 "read": true, 00:07:43.249 "write": true, 00:07:43.249 "unmap": true, 00:07:43.249 "flush": true, 00:07:43.249 "reset": true, 00:07:43.249 "nvme_admin": false, 00:07:43.249 "nvme_io": false, 00:07:43.249 "nvme_io_md": false, 00:07:43.249 "write_zeroes": true, 00:07:43.249 "zcopy": false, 00:07:43.249 "get_zone_info": false, 00:07:43.249 "zone_management": false, 00:07:43.249 "zone_append": false, 00:07:43.249 "compare": false, 00:07:43.249 "compare_and_write": false, 00:07:43.249 "abort": false, 00:07:43.249 "seek_hole": false, 00:07:43.249 
"seek_data": false, 00:07:43.249 "copy": false, 00:07:43.249 "nvme_iov_md": false 00:07:43.249 }, 00:07:43.249 "memory_domains": [ 00:07:43.249 { 00:07:43.249 "dma_device_id": "system", 00:07:43.249 "dma_device_type": 1 00:07:43.249 }, 00:07:43.249 { 00:07:43.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.249 "dma_device_type": 2 00:07:43.249 }, 00:07:43.249 { 00:07:43.249 "dma_device_id": "system", 00:07:43.249 "dma_device_type": 1 00:07:43.249 }, 00:07:43.249 { 00:07:43.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.249 "dma_device_type": 2 00:07:43.249 } 00:07:43.249 ], 00:07:43.249 "driver_specific": { 00:07:43.249 "raid": { 00:07:43.249 "uuid": "9bb67285-6535-48f4-93e2-537627db01f8", 00:07:43.249 "strip_size_kb": 64, 00:07:43.249 "state": "online", 00:07:43.249 "raid_level": "concat", 00:07:43.249 "superblock": true, 00:07:43.249 "num_base_bdevs": 2, 00:07:43.249 "num_base_bdevs_discovered": 2, 00:07:43.249 "num_base_bdevs_operational": 2, 00:07:43.249 "base_bdevs_list": [ 00:07:43.249 { 00:07:43.249 "name": "pt1", 00:07:43.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.249 "is_configured": true, 00:07:43.249 "data_offset": 2048, 00:07:43.249 "data_size": 63488 00:07:43.249 }, 00:07:43.249 { 00:07:43.249 "name": "pt2", 00:07:43.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.249 "is_configured": true, 00:07:43.249 "data_offset": 2048, 00:07:43.249 "data_size": 63488 00:07:43.249 } 00:07:43.249 ] 00:07:43.249 } 00:07:43.249 } 00:07:43.249 }' 00:07:43.249 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.508 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:43.508 pt2' 00:07:43.508 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.509 14:40:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.509 [2024-12-09 14:40:21.545990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9bb67285-6535-48f4-93e2-537627db01f8 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9bb67285-6535-48f4-93e2-537627db01f8 ']' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.509 [2024-12-09 14:40:21.593645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.509 [2024-12-09 14:40:21.593716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.509 [2024-12-09 14:40:21.593829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.509 [2024-12-09 14:40:21.593903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.509 [2024-12-09 14:40:21.593948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.509 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.768 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:43.768 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:43.768 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 [2024-12-09 14:40:21.713488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:43.769 [2024-12-09 14:40:21.715531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:43.769 [2024-12-09 14:40:21.715665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:43.769 [2024-12-09 14:40:21.715727] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:43.769 [2024-12-09 14:40:21.715743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.769 [2024-12-09 14:40:21.715753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:43.769 request: 00:07:43.769 { 00:07:43.769 "name": "raid_bdev1", 00:07:43.769 "raid_level": "concat", 00:07:43.769 "base_bdevs": [ 00:07:43.769 "malloc1", 00:07:43.769 "malloc2" 00:07:43.769 ], 00:07:43.769 "strip_size_kb": 64, 00:07:43.769 "superblock": false, 00:07:43.769 "method": "bdev_raid_create", 00:07:43.769 "req_id": 1 00:07:43.769 } 00:07:43.769 Got JSON-RPC error response 00:07:43.769 response: 00:07:43.769 { 00:07:43.769 "code": -17, 00:07:43.769 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:43.769 } 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:43.769 
14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 [2024-12-09 14:40:21.773330] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:43.769 [2024-12-09 14:40:21.773441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.769 [2024-12-09 14:40:21.773477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:43.769 [2024-12-09 14:40:21.773533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.769 [2024-12-09 14:40:21.775907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.769 [2024-12-09 14:40:21.775985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:43.769 [2024-12-09 14:40:21.776101] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:43.769 [2024-12-09 14:40:21.776201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:43.769 pt1 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.769 "name": "raid_bdev1", 00:07:43.769 "uuid": "9bb67285-6535-48f4-93e2-537627db01f8", 00:07:43.769 "strip_size_kb": 64, 00:07:43.769 "state": "configuring", 00:07:43.769 "raid_level": "concat", 00:07:43.769 "superblock": true, 00:07:43.769 "num_base_bdevs": 2, 00:07:43.769 "num_base_bdevs_discovered": 1, 00:07:43.769 "num_base_bdevs_operational": 2, 00:07:43.769 "base_bdevs_list": [ 00:07:43.769 { 00:07:43.769 "name": "pt1", 00:07:43.769 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:43.769 "is_configured": true, 00:07:43.769 "data_offset": 2048, 00:07:43.769 "data_size": 63488 00:07:43.769 }, 00:07:43.769 { 00:07:43.769 "name": null, 00:07:43.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.769 "is_configured": false, 00:07:43.769 "data_offset": 2048, 00:07:43.769 "data_size": 63488 00:07:43.769 } 00:07:43.769 ] 00:07:43.769 }' 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.769 14:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.338 [2024-12-09 14:40:22.236560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.338 [2024-12-09 14:40:22.236705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.338 [2024-12-09 14:40:22.236756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:44.338 [2024-12-09 14:40:22.236803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.338 [2024-12-09 14:40:22.237273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.338 [2024-12-09 14:40:22.237334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:44.338 [2024-12-09 14:40:22.237452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:44.338 [2024-12-09 14:40:22.237508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.338 [2024-12-09 14:40:22.237661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.338 [2024-12-09 14:40:22.237702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.338 [2024-12-09 14:40:22.237952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:44.338 [2024-12-09 14:40:22.238149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.338 [2024-12-09 14:40:22.238189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:44.338 [2024-12-09 14:40:22.238382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.338 pt2 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.338 "name": "raid_bdev1", 00:07:44.338 "uuid": "9bb67285-6535-48f4-93e2-537627db01f8", 00:07:44.338 "strip_size_kb": 64, 00:07:44.338 "state": "online", 00:07:44.338 "raid_level": "concat", 00:07:44.338 "superblock": true, 00:07:44.338 "num_base_bdevs": 2, 00:07:44.338 "num_base_bdevs_discovered": 2, 00:07:44.338 "num_base_bdevs_operational": 2, 00:07:44.338 "base_bdevs_list": [ 00:07:44.338 { 00:07:44.338 "name": "pt1", 00:07:44.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.338 "is_configured": true, 00:07:44.338 "data_offset": 2048, 00:07:44.338 "data_size": 63488 00:07:44.338 }, 00:07:44.338 { 00:07:44.338 "name": "pt2", 00:07:44.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.338 "is_configured": true, 00:07:44.338 "data_offset": 2048, 00:07:44.338 "data_size": 63488 00:07:44.338 } 00:07:44.338 ] 00:07:44.338 }' 00:07:44.338 14:40:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.338 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.598 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:44.598 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.599 [2024-12-09 14:40:22.584195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.599 "name": "raid_bdev1", 00:07:44.599 "aliases": [ 00:07:44.599 "9bb67285-6535-48f4-93e2-537627db01f8" 00:07:44.599 ], 00:07:44.599 "product_name": "Raid Volume", 00:07:44.599 "block_size": 512, 00:07:44.599 "num_blocks": 126976, 00:07:44.599 "uuid": "9bb67285-6535-48f4-93e2-537627db01f8", 00:07:44.599 "assigned_rate_limits": { 00:07:44.599 "rw_ios_per_sec": 0, 00:07:44.599 "rw_mbytes_per_sec": 0, 00:07:44.599 
"r_mbytes_per_sec": 0, 00:07:44.599 "w_mbytes_per_sec": 0 00:07:44.599 }, 00:07:44.599 "claimed": false, 00:07:44.599 "zoned": false, 00:07:44.599 "supported_io_types": { 00:07:44.599 "read": true, 00:07:44.599 "write": true, 00:07:44.599 "unmap": true, 00:07:44.599 "flush": true, 00:07:44.599 "reset": true, 00:07:44.599 "nvme_admin": false, 00:07:44.599 "nvme_io": false, 00:07:44.599 "nvme_io_md": false, 00:07:44.599 "write_zeroes": true, 00:07:44.599 "zcopy": false, 00:07:44.599 "get_zone_info": false, 00:07:44.599 "zone_management": false, 00:07:44.599 "zone_append": false, 00:07:44.599 "compare": false, 00:07:44.599 "compare_and_write": false, 00:07:44.599 "abort": false, 00:07:44.599 "seek_hole": false, 00:07:44.599 "seek_data": false, 00:07:44.599 "copy": false, 00:07:44.599 "nvme_iov_md": false 00:07:44.599 }, 00:07:44.599 "memory_domains": [ 00:07:44.599 { 00:07:44.599 "dma_device_id": "system", 00:07:44.599 "dma_device_type": 1 00:07:44.599 }, 00:07:44.599 { 00:07:44.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.599 "dma_device_type": 2 00:07:44.599 }, 00:07:44.599 { 00:07:44.599 "dma_device_id": "system", 00:07:44.599 "dma_device_type": 1 00:07:44.599 }, 00:07:44.599 { 00:07:44.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.599 "dma_device_type": 2 00:07:44.599 } 00:07:44.599 ], 00:07:44.599 "driver_specific": { 00:07:44.599 "raid": { 00:07:44.599 "uuid": "9bb67285-6535-48f4-93e2-537627db01f8", 00:07:44.599 "strip_size_kb": 64, 00:07:44.599 "state": "online", 00:07:44.599 "raid_level": "concat", 00:07:44.599 "superblock": true, 00:07:44.599 "num_base_bdevs": 2, 00:07:44.599 "num_base_bdevs_discovered": 2, 00:07:44.599 "num_base_bdevs_operational": 2, 00:07:44.599 "base_bdevs_list": [ 00:07:44.599 { 00:07:44.599 "name": "pt1", 00:07:44.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.599 "is_configured": true, 00:07:44.599 "data_offset": 2048, 00:07:44.599 "data_size": 63488 00:07:44.599 }, 00:07:44.599 { 00:07:44.599 "name": 
"pt2", 00:07:44.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.599 "is_configured": true, 00:07:44.599 "data_offset": 2048, 00:07:44.599 "data_size": 63488 00:07:44.599 } 00:07:44.599 ] 00:07:44.599 } 00:07:44.599 } 00:07:44.599 }' 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:44.599 pt2' 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.599 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.859 14:40:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.859 [2024-12-09 14:40:22.779901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9bb67285-6535-48f4-93e2-537627db01f8 '!=' 9bb67285-6535-48f4-93e2-537627db01f8 ']' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63495 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63495 ']' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63495 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63495 00:07:44.859 killing process with pid 63495 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63495' 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63495 00:07:44.859 [2024-12-09 14:40:22.860947] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.859 [2024-12-09 14:40:22.861043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.859 [2024-12-09 14:40:22.861091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.859 [2024-12-09 14:40:22.861102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:44.859 14:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63495 00:07:45.119 [2024-12-09 14:40:23.070607] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.499 14:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:46.499 00:07:46.499 real 0m4.410s 00:07:46.499 user 0m6.201s 00:07:46.499 sys 0m0.701s 00:07:46.499 14:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.499 ************************************ 00:07:46.499 END TEST 
raid_superblock_test 00:07:46.499 ************************************ 00:07:46.499 14:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.499 14:40:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:46.499 14:40:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.499 14:40:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.499 14:40:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.499 ************************************ 00:07:46.499 START TEST raid_read_error_test 00:07:46.499 ************************************ 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kpwA9c6hsx 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63701 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63701 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63701 ']' 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.499 14:40:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.499 14:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.499 [2024-12-09 14:40:24.361563] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:46.499 [2024-12-09 14:40:24.361783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63701 ] 00:07:46.499 [2024-12-09 14:40:24.519283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.759 [2024-12-09 14:40:24.634162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.759 [2024-12-09 14:40:24.830259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.759 [2024-12-09 14:40:24.830416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:47.327 BaseBdev1_malloc 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.327 true 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.327 [2024-12-09 14:40:25.249210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:47.327 [2024-12-09 14:40:25.249324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.327 [2024-12-09 14:40:25.249365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:47.327 [2024-12-09 14:40:25.249397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.327 [2024-12-09 14:40:25.251564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.327 [2024-12-09 14:40:25.251658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:47.327 BaseBdev1 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.327 BaseBdev2_malloc 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.327 true 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.327 [2024-12-09 14:40:25.314843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:47.327 [2024-12-09 14:40:25.314905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.327 [2024-12-09 14:40:25.314922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:47.327 [2024-12-09 14:40:25.314932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.327 [2024-12-09 14:40:25.317056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.327 [2024-12-09 14:40:25.317210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:47.327 BaseBdev2 00:07:47.327 14:40:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.327 [2024-12-09 14:40:25.326886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.327 [2024-12-09 14:40:25.328770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.327 [2024-12-09 14:40:25.329017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.327 [2024-12-09 14:40:25.329070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.327 [2024-12-09 14:40:25.329321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:47.327 [2024-12-09 14:40:25.329538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.327 [2024-12-09 14:40:25.329599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:47.327 [2024-12-09 14:40:25.329815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.327 14:40:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.327 "name": "raid_bdev1", 00:07:47.327 "uuid": "3ba7e676-3e53-4ae4-b331-3bc4939a3c96", 00:07:47.327 "strip_size_kb": 64, 00:07:47.327 "state": "online", 00:07:47.327 "raid_level": "concat", 00:07:47.327 "superblock": true, 00:07:47.327 "num_base_bdevs": 2, 00:07:47.327 "num_base_bdevs_discovered": 2, 00:07:47.327 "num_base_bdevs_operational": 2, 00:07:47.327 "base_bdevs_list": [ 00:07:47.327 { 00:07:47.327 "name": "BaseBdev1", 00:07:47.327 "uuid": "78c6f48f-b894-5e01-adaa-f6985b534e8c", 00:07:47.327 "is_configured": true, 00:07:47.327 "data_offset": 2048, 00:07:47.327 "data_size": 63488 00:07:47.327 }, 
00:07:47.327 { 00:07:47.327 "name": "BaseBdev2", 00:07:47.327 "uuid": "4610d71b-4c49-5ce6-8bab-7742840eb4ce", 00:07:47.327 "is_configured": true, 00:07:47.327 "data_offset": 2048, 00:07:47.327 "data_size": 63488 00:07:47.327 } 00:07:47.327 ] 00:07:47.327 }' 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.327 14:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.894 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.894 14:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:47.894 [2024-12-09 14:40:25.863453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.831 14:40:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.831 "name": "raid_bdev1", 00:07:48.831 "uuid": "3ba7e676-3e53-4ae4-b331-3bc4939a3c96", 00:07:48.831 "strip_size_kb": 64, 00:07:48.831 "state": "online", 00:07:48.831 "raid_level": "concat", 00:07:48.831 "superblock": true, 00:07:48.831 "num_base_bdevs": 2, 00:07:48.831 "num_base_bdevs_discovered": 2, 00:07:48.831 "num_base_bdevs_operational": 2, 00:07:48.831 "base_bdevs_list": [ 00:07:48.831 { 00:07:48.831 "name": "BaseBdev1", 00:07:48.831 "uuid": "78c6f48f-b894-5e01-adaa-f6985b534e8c", 00:07:48.831 "is_configured": true, 00:07:48.831 "data_offset": 2048, 00:07:48.831 "data_size": 63488 00:07:48.831 }, 
00:07:48.831 { 00:07:48.831 "name": "BaseBdev2", 00:07:48.831 "uuid": "4610d71b-4c49-5ce6-8bab-7742840eb4ce", 00:07:48.831 "is_configured": true, 00:07:48.831 "data_offset": 2048, 00:07:48.831 "data_size": 63488 00:07:48.831 } 00:07:48.831 ] 00:07:48.831 }' 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.831 14:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.400 [2024-12-09 14:40:27.231291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.400 [2024-12-09 14:40:27.231408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.400 [2024-12-09 14:40:27.234244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.400 [2024-12-09 14:40:27.234294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.400 [2024-12-09 14:40:27.234328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.400 [2024-12-09 14:40:27.234342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:49.400 { 00:07:49.400 "results": [ 00:07:49.400 { 00:07:49.400 "job": "raid_bdev1", 00:07:49.400 "core_mask": "0x1", 00:07:49.400 "workload": "randrw", 00:07:49.400 "percentage": 50, 00:07:49.400 "status": "finished", 00:07:49.400 "queue_depth": 1, 00:07:49.400 "io_size": 131072, 00:07:49.400 "runtime": 1.368947, 00:07:49.400 "iops": 15401.618908547956, 00:07:49.400 "mibps": 1925.2023635684945, 00:07:49.400 "io_failed": 1, 
00:07:49.400 "io_timeout": 0, 00:07:49.400 "avg_latency_us": 89.92173736373776, 00:07:49.400 "min_latency_us": 27.053275109170304, 00:07:49.400 "max_latency_us": 1402.2986899563318 00:07:49.400 } 00:07:49.400 ], 00:07:49.400 "core_count": 1 00:07:49.400 } 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63701 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63701 ']' 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63701 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63701 00:07:49.400 killing process with pid 63701 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63701' 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63701 00:07:49.400 [2024-12-09 14:40:27.285227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.400 14:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63701 00:07:49.400 [2024-12-09 14:40:27.428447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kpwA9c6hsx 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:50.782 00:07:50.782 real 0m4.365s 00:07:50.782 user 0m5.205s 00:07:50.782 sys 0m0.537s 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.782 14:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.782 ************************************ 00:07:50.782 END TEST raid_read_error_test 00:07:50.782 ************************************ 00:07:50.782 14:40:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:50.782 14:40:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:50.782 14:40:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.782 14:40:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.782 ************************************ 00:07:50.782 START TEST raid_write_error_test 00:07:50.782 ************************************ 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uHaFaU53ng 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63847 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63847 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63847 ']' 00:07:50.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.782 14:40:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.782 [2024-12-09 14:40:28.791004] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:07:50.782 [2024-12-09 14:40:28.791116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63847 ] 00:07:51.042 [2024-12-09 14:40:28.965810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.042 [2024-12-09 14:40:29.086262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.302 [2024-12-09 14:40:29.287116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.302 [2024-12-09 14:40:29.287168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.562 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.562 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:51.562 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:51.562 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:51.562 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.562 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.562 BaseBdev1_malloc 00:07:51.562 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.820 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:51.820 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.820 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.820 true 00:07:51.820 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:51.820 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:51.820 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.820 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.820 [2024-12-09 14:40:29.699748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:51.821 [2024-12-09 14:40:29.699809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.821 [2024-12-09 14:40:29.699831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:51.821 [2024-12-09 14:40:29.699842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.821 [2024-12-09 14:40:29.702015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.821 [2024-12-09 14:40:29.702058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:51.821 BaseBdev1 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.821 BaseBdev2_malloc 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:51.821 14:40:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.821 true 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.821 [2024-12-09 14:40:29.767023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:51.821 [2024-12-09 14:40:29.767158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.821 [2024-12-09 14:40:29.767186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:51.821 [2024-12-09 14:40:29.767199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.821 [2024-12-09 14:40:29.769447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.821 [2024-12-09 14:40:29.769489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:51.821 BaseBdev2 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.821 [2024-12-09 14:40:29.779069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:51.821 [2024-12-09 14:40:29.780939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.821 [2024-12-09 14:40:29.781138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:51.821 [2024-12-09 14:40:29.781154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:51.821 [2024-12-09 14:40:29.781408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:51.821 [2024-12-09 14:40:29.781608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:51.821 [2024-12-09 14:40:29.781621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:51.821 [2024-12-09 14:40:29.781812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.821 14:40:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.821 "name": "raid_bdev1", 00:07:51.821 "uuid": "f40b2449-33d0-434f-ae4a-d9418b582c77", 00:07:51.821 "strip_size_kb": 64, 00:07:51.821 "state": "online", 00:07:51.821 "raid_level": "concat", 00:07:51.821 "superblock": true, 00:07:51.821 "num_base_bdevs": 2, 00:07:51.821 "num_base_bdevs_discovered": 2, 00:07:51.821 "num_base_bdevs_operational": 2, 00:07:51.821 "base_bdevs_list": [ 00:07:51.821 { 00:07:51.821 "name": "BaseBdev1", 00:07:51.821 "uuid": "4d114b06-58dd-5082-ad11-01e085b2fb65", 00:07:51.821 "is_configured": true, 00:07:51.821 "data_offset": 2048, 00:07:51.821 "data_size": 63488 00:07:51.821 }, 00:07:51.821 { 00:07:51.821 "name": "BaseBdev2", 00:07:51.821 "uuid": "472f5f78-41e4-5bf3-9a16-05a965af516e", 00:07:51.821 "is_configured": true, 00:07:51.821 "data_offset": 2048, 00:07:51.821 "data_size": 63488 00:07:51.821 } 00:07:51.821 ] 00:07:51.821 }' 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.821 14:40:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.390 14:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:52.390 14:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:52.390 [2024-12-09 14:40:30.347526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.328 "name": "raid_bdev1", 00:07:53.328 "uuid": "f40b2449-33d0-434f-ae4a-d9418b582c77", 00:07:53.328 "strip_size_kb": 64, 00:07:53.328 "state": "online", 00:07:53.328 "raid_level": "concat", 00:07:53.328 "superblock": true, 00:07:53.328 "num_base_bdevs": 2, 00:07:53.328 "num_base_bdevs_discovered": 2, 00:07:53.328 "num_base_bdevs_operational": 2, 00:07:53.328 "base_bdevs_list": [ 00:07:53.328 { 00:07:53.328 "name": "BaseBdev1", 00:07:53.328 "uuid": "4d114b06-58dd-5082-ad11-01e085b2fb65", 00:07:53.328 "is_configured": true, 00:07:53.328 "data_offset": 2048, 00:07:53.328 "data_size": 63488 00:07:53.328 }, 00:07:53.328 { 00:07:53.328 "name": "BaseBdev2", 00:07:53.328 "uuid": "472f5f78-41e4-5bf3-9a16-05a965af516e", 00:07:53.328 "is_configured": true, 00:07:53.328 "data_offset": 2048, 00:07:53.328 "data_size": 63488 00:07:53.328 } 00:07:53.328 ] 00:07:53.328 }' 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.328 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.897 [2024-12-09 14:40:31.759958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.897 [2024-12-09 14:40:31.760076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.897 [2024-12-09 14:40:31.763226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.897 [2024-12-09 14:40:31.763336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.897 [2024-12-09 14:40:31.763394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.897 [2024-12-09 14:40:31.763448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:53.897 { 00:07:53.897 "results": [ 00:07:53.897 { 00:07:53.897 "job": "raid_bdev1", 00:07:53.897 "core_mask": "0x1", 00:07:53.897 "workload": "randrw", 00:07:53.897 "percentage": 50, 00:07:53.897 "status": "finished", 00:07:53.897 "queue_depth": 1, 00:07:53.897 "io_size": 131072, 00:07:53.897 "runtime": 1.413466, 00:07:53.897 "iops": 15171.924899502357, 00:07:53.897 "mibps": 1896.4906124377947, 00:07:53.897 "io_failed": 1, 00:07:53.897 "io_timeout": 0, 00:07:53.897 "avg_latency_us": 91.30775906338536, 00:07:53.897 "min_latency_us": 26.717903930131005, 00:07:53.897 "max_latency_us": 1445.2262008733624 00:07:53.897 } 00:07:53.897 ], 00:07:53.897 "core_count": 1 00:07:53.897 } 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63847 00:07:53.897 14:40:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63847 ']' 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63847 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63847 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.897 killing process with pid 63847 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63847' 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63847 00:07:53.897 [2024-12-09 14:40:31.795863] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.897 14:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63847 00:07:53.897 [2024-12-09 14:40:31.931995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uHaFaU53ng 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.276 14:40:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:55.276 00:07:55.276 real 0m4.462s 00:07:55.276 user 0m5.389s 00:07:55.276 sys 0m0.532s 00:07:55.276 ************************************ 00:07:55.276 END TEST raid_write_error_test 00:07:55.276 ************************************ 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.276 14:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.276 14:40:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:55.276 14:40:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:55.276 14:40:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:55.276 14:40:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.276 14:40:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.276 ************************************ 00:07:55.276 START TEST raid_state_function_test 00:07:55.276 ************************************ 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63990 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63990' 00:07:55.276 Process raid pid: 63990 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63990 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63990 ']' 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.276 14:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.276 [2024-12-09 14:40:33.311254] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:07:55.276 [2024-12-09 14:40:33.311386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.535 [2024-12-09 14:40:33.487347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.535 [2024-12-09 14:40:33.605624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.794 [2024-12-09 14:40:33.805831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.794 [2024-12-09 14:40:33.805965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.054 [2024-12-09 14:40:34.161206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.054 [2024-12-09 14:40:34.161271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.054 [2024-12-09 14:40:34.161282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.054 [2024-12-09 14:40:34.161293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.054 14:40:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.054 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.313 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.313 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.313 "name": "Existed_Raid", 00:07:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.313 "strip_size_kb": 0, 00:07:56.313 "state": "configuring", 00:07:56.313 
"raid_level": "raid1", 00:07:56.313 "superblock": false, 00:07:56.313 "num_base_bdevs": 2, 00:07:56.313 "num_base_bdevs_discovered": 0, 00:07:56.313 "num_base_bdevs_operational": 2, 00:07:56.313 "base_bdevs_list": [ 00:07:56.313 { 00:07:56.313 "name": "BaseBdev1", 00:07:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.313 "is_configured": false, 00:07:56.313 "data_offset": 0, 00:07:56.313 "data_size": 0 00:07:56.313 }, 00:07:56.313 { 00:07:56.313 "name": "BaseBdev2", 00:07:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.313 "is_configured": false, 00:07:56.313 "data_offset": 0, 00:07:56.313 "data_size": 0 00:07:56.313 } 00:07:56.313 ] 00:07:56.313 }' 00:07:56.313 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.313 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.573 [2024-12-09 14:40:34.600401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.573 [2024-12-09 14:40:34.600503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:56.573 [2024-12-09 14:40:34.612402] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.573 [2024-12-09 14:40:34.612523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.573 [2024-12-09 14:40:34.612553] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.573 [2024-12-09 14:40:34.612593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.573 [2024-12-09 14:40:34.658227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.573 BaseBdev1 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.573 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.573 [ 00:07:56.573 { 00:07:56.573 "name": "BaseBdev1", 00:07:56.573 "aliases": [ 00:07:56.573 "50996077-82fe-49a9-8649-6e61bf02b06c" 00:07:56.573 ], 00:07:56.573 "product_name": "Malloc disk", 00:07:56.573 "block_size": 512, 00:07:56.573 "num_blocks": 65536, 00:07:56.573 "uuid": "50996077-82fe-49a9-8649-6e61bf02b06c", 00:07:56.573 "assigned_rate_limits": { 00:07:56.573 "rw_ios_per_sec": 0, 00:07:56.573 "rw_mbytes_per_sec": 0, 00:07:56.573 "r_mbytes_per_sec": 0, 00:07:56.573 "w_mbytes_per_sec": 0 00:07:56.573 }, 00:07:56.573 "claimed": true, 00:07:56.573 "claim_type": "exclusive_write", 00:07:56.573 "zoned": false, 00:07:56.573 "supported_io_types": { 00:07:56.573 "read": true, 00:07:56.573 "write": true, 00:07:56.573 "unmap": true, 00:07:56.573 "flush": true, 00:07:56.573 "reset": true, 00:07:56.573 "nvme_admin": false, 00:07:56.573 "nvme_io": false, 00:07:56.573 "nvme_io_md": false, 00:07:56.573 "write_zeroes": true, 00:07:56.573 "zcopy": true, 00:07:56.574 "get_zone_info": false, 00:07:56.574 "zone_management": false, 00:07:56.574 "zone_append": false, 00:07:56.574 "compare": false, 00:07:56.574 "compare_and_write": false, 00:07:56.574 "abort": true, 00:07:56.574 "seek_hole": false, 00:07:56.574 "seek_data": false, 00:07:56.833 "copy": true, 00:07:56.833 "nvme_iov_md": 
false 00:07:56.833 }, 00:07:56.833 "memory_domains": [ 00:07:56.833 { 00:07:56.833 "dma_device_id": "system", 00:07:56.833 "dma_device_type": 1 00:07:56.833 }, 00:07:56.833 { 00:07:56.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.833 "dma_device_type": 2 00:07:56.833 } 00:07:56.833 ], 00:07:56.833 "driver_specific": {} 00:07:56.833 } 00:07:56.833 ] 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.833 14:40:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.833 "name": "Existed_Raid", 00:07:56.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.833 "strip_size_kb": 0, 00:07:56.833 "state": "configuring", 00:07:56.833 "raid_level": "raid1", 00:07:56.833 "superblock": false, 00:07:56.833 "num_base_bdevs": 2, 00:07:56.833 "num_base_bdevs_discovered": 1, 00:07:56.833 "num_base_bdevs_operational": 2, 00:07:56.833 "base_bdevs_list": [ 00:07:56.833 { 00:07:56.833 "name": "BaseBdev1", 00:07:56.833 "uuid": "50996077-82fe-49a9-8649-6e61bf02b06c", 00:07:56.833 "is_configured": true, 00:07:56.833 "data_offset": 0, 00:07:56.833 "data_size": 65536 00:07:56.833 }, 00:07:56.833 { 00:07:56.833 "name": "BaseBdev2", 00:07:56.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.833 "is_configured": false, 00:07:56.833 "data_offset": 0, 00:07:56.833 "data_size": 0 00:07:56.833 } 00:07:56.833 ] 00:07:56.833 }' 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.833 14:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.092 [2024-12-09 14:40:35.117528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.092 [2024-12-09 14:40:35.117594] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.092 [2024-12-09 14:40:35.129530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.092 [2024-12-09 14:40:35.131402] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.092 [2024-12-09 14:40:35.131496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.092 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.093 "name": "Existed_Raid", 00:07:57.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.093 "strip_size_kb": 0, 00:07:57.093 "state": "configuring", 00:07:57.093 "raid_level": "raid1", 00:07:57.093 "superblock": false, 00:07:57.093 "num_base_bdevs": 2, 00:07:57.093 "num_base_bdevs_discovered": 1, 00:07:57.093 "num_base_bdevs_operational": 2, 00:07:57.093 "base_bdevs_list": [ 00:07:57.093 { 00:07:57.093 "name": "BaseBdev1", 00:07:57.093 "uuid": "50996077-82fe-49a9-8649-6e61bf02b06c", 00:07:57.093 "is_configured": true, 00:07:57.093 "data_offset": 0, 00:07:57.093 "data_size": 65536 00:07:57.093 }, 00:07:57.093 { 00:07:57.093 "name": "BaseBdev2", 00:07:57.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.093 "is_configured": false, 00:07:57.093 "data_offset": 0, 00:07:57.093 "data_size": 0 00:07:57.093 } 00:07:57.093 
] 00:07:57.093 }' 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.093 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 [2024-12-09 14:40:35.579459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.661 [2024-12-09 14:40:35.579656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.661 [2024-12-09 14:40:35.579689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:57.661 [2024-12-09 14:40:35.580013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:57.661 [2024-12-09 14:40:35.580267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.661 [2024-12-09 14:40:35.580325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:57.661 [2024-12-09 14:40:35.580688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.661 BaseBdev2 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.661 14:40:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 [ 00:07:57.661 { 00:07:57.661 "name": "BaseBdev2", 00:07:57.661 "aliases": [ 00:07:57.661 "30867b0e-64ed-4399-af62-e73b22756c5c" 00:07:57.661 ], 00:07:57.661 "product_name": "Malloc disk", 00:07:57.661 "block_size": 512, 00:07:57.661 "num_blocks": 65536, 00:07:57.661 "uuid": "30867b0e-64ed-4399-af62-e73b22756c5c", 00:07:57.661 "assigned_rate_limits": { 00:07:57.661 "rw_ios_per_sec": 0, 00:07:57.661 "rw_mbytes_per_sec": 0, 00:07:57.661 "r_mbytes_per_sec": 0, 00:07:57.661 "w_mbytes_per_sec": 0 00:07:57.661 }, 00:07:57.661 "claimed": true, 00:07:57.661 "claim_type": "exclusive_write", 00:07:57.661 "zoned": false, 00:07:57.661 "supported_io_types": { 00:07:57.661 "read": true, 00:07:57.661 "write": true, 00:07:57.661 "unmap": true, 00:07:57.661 "flush": true, 00:07:57.661 "reset": true, 00:07:57.661 "nvme_admin": false, 00:07:57.661 "nvme_io": false, 00:07:57.661 "nvme_io_md": 
false, 00:07:57.661 "write_zeroes": true, 00:07:57.661 "zcopy": true, 00:07:57.661 "get_zone_info": false, 00:07:57.661 "zone_management": false, 00:07:57.661 "zone_append": false, 00:07:57.661 "compare": false, 00:07:57.661 "compare_and_write": false, 00:07:57.661 "abort": true, 00:07:57.661 "seek_hole": false, 00:07:57.661 "seek_data": false, 00:07:57.661 "copy": true, 00:07:57.661 "nvme_iov_md": false 00:07:57.661 }, 00:07:57.661 "memory_domains": [ 00:07:57.661 { 00:07:57.661 "dma_device_id": "system", 00:07:57.661 "dma_device_type": 1 00:07:57.661 }, 00:07:57.661 { 00:07:57.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.661 "dma_device_type": 2 00:07:57.661 } 00:07:57.661 ], 00:07:57.661 "driver_specific": {} 00:07:57.661 } 00:07:57.661 ] 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.661 "name": "Existed_Raid", 00:07:57.661 "uuid": "9af195a1-db04-421c-8916-c3f7de9c39b1", 00:07:57.661 "strip_size_kb": 0, 00:07:57.661 "state": "online", 00:07:57.661 "raid_level": "raid1", 00:07:57.661 "superblock": false, 00:07:57.661 "num_base_bdevs": 2, 00:07:57.661 "num_base_bdevs_discovered": 2, 00:07:57.661 "num_base_bdevs_operational": 2, 00:07:57.661 "base_bdevs_list": [ 00:07:57.661 { 00:07:57.661 "name": "BaseBdev1", 00:07:57.661 "uuid": "50996077-82fe-49a9-8649-6e61bf02b06c", 00:07:57.661 "is_configured": true, 00:07:57.661 "data_offset": 0, 00:07:57.661 "data_size": 65536 00:07:57.661 }, 00:07:57.661 { 00:07:57.661 "name": "BaseBdev2", 00:07:57.661 "uuid": "30867b0e-64ed-4399-af62-e73b22756c5c", 00:07:57.661 "is_configured": true, 00:07:57.661 "data_offset": 0, 00:07:57.661 "data_size": 65536 00:07:57.661 } 00:07:57.661 ] 00:07:57.661 }' 00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:57.661 14:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.231 [2024-12-09 14:40:36.074980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.231 "name": "Existed_Raid", 00:07:58.231 "aliases": [ 00:07:58.231 "9af195a1-db04-421c-8916-c3f7de9c39b1" 00:07:58.231 ], 00:07:58.231 "product_name": "Raid Volume", 00:07:58.231 "block_size": 512, 00:07:58.231 "num_blocks": 65536, 00:07:58.231 "uuid": "9af195a1-db04-421c-8916-c3f7de9c39b1", 00:07:58.231 "assigned_rate_limits": { 00:07:58.231 "rw_ios_per_sec": 0, 00:07:58.231 "rw_mbytes_per_sec": 0, 00:07:58.231 "r_mbytes_per_sec": 
0, 00:07:58.231 "w_mbytes_per_sec": 0 00:07:58.231 }, 00:07:58.231 "claimed": false, 00:07:58.231 "zoned": false, 00:07:58.231 "supported_io_types": { 00:07:58.231 "read": true, 00:07:58.231 "write": true, 00:07:58.231 "unmap": false, 00:07:58.231 "flush": false, 00:07:58.231 "reset": true, 00:07:58.231 "nvme_admin": false, 00:07:58.231 "nvme_io": false, 00:07:58.231 "nvme_io_md": false, 00:07:58.231 "write_zeroes": true, 00:07:58.231 "zcopy": false, 00:07:58.231 "get_zone_info": false, 00:07:58.231 "zone_management": false, 00:07:58.231 "zone_append": false, 00:07:58.231 "compare": false, 00:07:58.231 "compare_and_write": false, 00:07:58.231 "abort": false, 00:07:58.231 "seek_hole": false, 00:07:58.231 "seek_data": false, 00:07:58.231 "copy": false, 00:07:58.231 "nvme_iov_md": false 00:07:58.231 }, 00:07:58.231 "memory_domains": [ 00:07:58.231 { 00:07:58.231 "dma_device_id": "system", 00:07:58.231 "dma_device_type": 1 00:07:58.231 }, 00:07:58.231 { 00:07:58.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.231 "dma_device_type": 2 00:07:58.231 }, 00:07:58.231 { 00:07:58.231 "dma_device_id": "system", 00:07:58.231 "dma_device_type": 1 00:07:58.231 }, 00:07:58.231 { 00:07:58.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.231 "dma_device_type": 2 00:07:58.231 } 00:07:58.231 ], 00:07:58.231 "driver_specific": { 00:07:58.231 "raid": { 00:07:58.231 "uuid": "9af195a1-db04-421c-8916-c3f7de9c39b1", 00:07:58.231 "strip_size_kb": 0, 00:07:58.231 "state": "online", 00:07:58.231 "raid_level": "raid1", 00:07:58.231 "superblock": false, 00:07:58.231 "num_base_bdevs": 2, 00:07:58.231 "num_base_bdevs_discovered": 2, 00:07:58.231 "num_base_bdevs_operational": 2, 00:07:58.231 "base_bdevs_list": [ 00:07:58.231 { 00:07:58.231 "name": "BaseBdev1", 00:07:58.231 "uuid": "50996077-82fe-49a9-8649-6e61bf02b06c", 00:07:58.231 "is_configured": true, 00:07:58.231 "data_offset": 0, 00:07:58.231 "data_size": 65536 00:07:58.231 }, 00:07:58.231 { 00:07:58.231 "name": "BaseBdev2", 
00:07:58.231 "uuid": "30867b0e-64ed-4399-af62-e73b22756c5c", 00:07:58.231 "is_configured": true, 00:07:58.231 "data_offset": 0, 00:07:58.231 "data_size": 65536 00:07:58.231 } 00:07:58.231 ] 00:07:58.231 } 00:07:58.231 } 00:07:58.231 }' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:58.231 BaseBdev2' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.231 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 [2024-12-09 14:40:36.278384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.491 "name": "Existed_Raid", 00:07:58.491 "uuid": "9af195a1-db04-421c-8916-c3f7de9c39b1", 00:07:58.491 "strip_size_kb": 0, 00:07:58.491 "state": "online", 00:07:58.491 "raid_level": "raid1", 00:07:58.491 "superblock": false, 00:07:58.491 "num_base_bdevs": 2, 00:07:58.491 "num_base_bdevs_discovered": 1, 00:07:58.491 "num_base_bdevs_operational": 1, 00:07:58.491 "base_bdevs_list": [ 00:07:58.491 
{ 00:07:58.491 "name": null, 00:07:58.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.491 "is_configured": false, 00:07:58.491 "data_offset": 0, 00:07:58.491 "data_size": 65536 00:07:58.491 }, 00:07:58.491 { 00:07:58.491 "name": "BaseBdev2", 00:07:58.491 "uuid": "30867b0e-64ed-4399-af62-e73b22756c5c", 00:07:58.491 "is_configured": true, 00:07:58.491 "data_offset": 0, 00:07:58.491 "data_size": 65536 00:07:58.491 } 00:07:58.491 ] 00:07:58.491 }' 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.491 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.750 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:58.750 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.750 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.750 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.750 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.750 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.750 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:59.009 [2024-12-09 14:40:36.899986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.009 [2024-12-09 14:40:36.900159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.009 [2024-12-09 14:40:36.996970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.009 [2024-12-09 14:40:36.997031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.009 [2024-12-09 14:40:36.997044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.009 14:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63990 00:07:59.009 14:40:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63990 ']' 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63990 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63990 00:07:59.009 killing process with pid 63990 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63990' 00:07:59.009 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63990 00:07:59.009 [2024-12-09 14:40:37.068418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.010 14:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63990 00:07:59.010 [2024-12-09 14:40:37.086397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:00.390 00:08:00.390 real 0m5.023s 00:08:00.390 user 0m7.195s 00:08:00.390 sys 0m0.803s 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.390 ************************************ 00:08:00.390 END TEST raid_state_function_test 00:08:00.390 ************************************ 00:08:00.390 14:40:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:00.390 14:40:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.390 14:40:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.390 14:40:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.390 ************************************ 00:08:00.390 START TEST raid_state_function_test_sb 00:08:00.390 ************************************ 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:00.390 Process raid pid: 64238 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64238 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64238' 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64238 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64238 ']' 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.390 14:40:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.390 14:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.390 [2024-12-09 14:40:38.410693] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:00.390 [2024-12-09 14:40:38.410922] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.650 [2024-12-09 14:40:38.588900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.650 [2024-12-09 14:40:38.709362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.909 [2024-12-09 14:40:38.925469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.909 [2024-12-09 14:40:38.925503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 [2024-12-09 14:40:39.272772] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.170 [2024-12-09 14:40:39.272830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.170 [2024-12-09 14:40:39.272840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.170 [2024-12-09 14:40:39.272850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.170 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.432 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.432 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.432 "name": "Existed_Raid", 00:08:01.432 "uuid": "97dd590e-c2b0-4924-8b16-72fc621784ed", 00:08:01.432 "strip_size_kb": 0, 00:08:01.432 "state": "configuring", 00:08:01.432 "raid_level": "raid1", 00:08:01.432 "superblock": true, 00:08:01.432 "num_base_bdevs": 2, 00:08:01.432 "num_base_bdevs_discovered": 0, 00:08:01.432 "num_base_bdevs_operational": 2, 00:08:01.432 "base_bdevs_list": [ 00:08:01.432 { 00:08:01.432 "name": "BaseBdev1", 00:08:01.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.432 "is_configured": false, 00:08:01.432 "data_offset": 0, 00:08:01.432 "data_size": 0 00:08:01.432 }, 00:08:01.432 { 00:08:01.432 "name": "BaseBdev2", 00:08:01.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.432 "is_configured": false, 00:08:01.432 "data_offset": 0, 00:08:01.432 "data_size": 0 00:08:01.432 } 00:08:01.432 ] 00:08:01.432 }' 00:08:01.432 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.432 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 [2024-12-09 14:40:39.735947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:01.691 [2024-12-09 14:40:39.736056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 [2024-12-09 14:40:39.747927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.691 [2024-12-09 14:40:39.748019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.691 [2024-12-09 14:40:39.748047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.691 [2024-12-09 14:40:39.748074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 [2024-12-09 14:40:39.796761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.691 BaseBdev1 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.691 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.951 [ 00:08:01.951 { 00:08:01.951 "name": "BaseBdev1", 00:08:01.951 "aliases": [ 00:08:01.951 "7ba344ad-eb8e-44d8-9929-0b92ea924c96" 00:08:01.951 ], 00:08:01.951 "product_name": "Malloc disk", 00:08:01.951 "block_size": 512, 00:08:01.951 "num_blocks": 65536, 00:08:01.951 "uuid": "7ba344ad-eb8e-44d8-9929-0b92ea924c96", 00:08:01.951 "assigned_rate_limits": { 00:08:01.951 "rw_ios_per_sec": 0, 00:08:01.951 "rw_mbytes_per_sec": 0, 00:08:01.951 "r_mbytes_per_sec": 0, 00:08:01.951 "w_mbytes_per_sec": 0 00:08:01.951 }, 00:08:01.951 "claimed": true, 
00:08:01.951 "claim_type": "exclusive_write", 00:08:01.951 "zoned": false, 00:08:01.951 "supported_io_types": { 00:08:01.951 "read": true, 00:08:01.951 "write": true, 00:08:01.951 "unmap": true, 00:08:01.951 "flush": true, 00:08:01.951 "reset": true, 00:08:01.951 "nvme_admin": false, 00:08:01.951 "nvme_io": false, 00:08:01.951 "nvme_io_md": false, 00:08:01.951 "write_zeroes": true, 00:08:01.951 "zcopy": true, 00:08:01.951 "get_zone_info": false, 00:08:01.951 "zone_management": false, 00:08:01.951 "zone_append": false, 00:08:01.951 "compare": false, 00:08:01.951 "compare_and_write": false, 00:08:01.951 "abort": true, 00:08:01.951 "seek_hole": false, 00:08:01.951 "seek_data": false, 00:08:01.951 "copy": true, 00:08:01.951 "nvme_iov_md": false 00:08:01.951 }, 00:08:01.951 "memory_domains": [ 00:08:01.951 { 00:08:01.951 "dma_device_id": "system", 00:08:01.951 "dma_device_type": 1 00:08:01.951 }, 00:08:01.951 { 00:08:01.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.951 "dma_device_type": 2 00:08:01.951 } 00:08:01.951 ], 00:08:01.951 "driver_specific": {} 00:08:01.951 } 00:08:01.951 ] 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.951 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.951 "name": "Existed_Raid", 00:08:01.951 "uuid": "240e14ad-450f-4d8f-82f2-1e24e0d60334", 00:08:01.951 "strip_size_kb": 0, 00:08:01.951 "state": "configuring", 00:08:01.951 "raid_level": "raid1", 00:08:01.951 "superblock": true, 00:08:01.951 "num_base_bdevs": 2, 00:08:01.951 "num_base_bdevs_discovered": 1, 00:08:01.951 "num_base_bdevs_operational": 2, 00:08:01.951 "base_bdevs_list": [ 00:08:01.951 { 00:08:01.951 "name": "BaseBdev1", 00:08:01.952 "uuid": "7ba344ad-eb8e-44d8-9929-0b92ea924c96", 00:08:01.952 "is_configured": true, 00:08:01.952 "data_offset": 2048, 00:08:01.952 "data_size": 63488 00:08:01.952 }, 00:08:01.952 { 00:08:01.952 "name": "BaseBdev2", 00:08:01.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.952 "is_configured": false, 00:08:01.952 
"data_offset": 0, 00:08:01.952 "data_size": 0 00:08:01.952 } 00:08:01.952 ] 00:08:01.952 }' 00:08:01.952 14:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.952 14:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 [2024-12-09 14:40:40.295992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.211 [2024-12-09 14:40:40.296050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 [2024-12-09 14:40:40.308000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.211 [2024-12-09 14:40:40.309810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.211 [2024-12-09 14:40:40.309898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.471 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.471 14:40:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.471 "name": "Existed_Raid", 00:08:02.471 "uuid": "a0f3436e-0f82-46db-872f-5cd76b04643c", 00:08:02.471 "strip_size_kb": 0, 00:08:02.471 "state": "configuring", 00:08:02.471 "raid_level": "raid1", 00:08:02.471 "superblock": true, 00:08:02.471 "num_base_bdevs": 2, 00:08:02.471 "num_base_bdevs_discovered": 1, 00:08:02.471 "num_base_bdevs_operational": 2, 00:08:02.471 "base_bdevs_list": [ 00:08:02.471 { 00:08:02.471 "name": "BaseBdev1", 00:08:02.471 "uuid": "7ba344ad-eb8e-44d8-9929-0b92ea924c96", 00:08:02.471 "is_configured": true, 00:08:02.471 "data_offset": 2048, 00:08:02.471 "data_size": 63488 00:08:02.471 }, 00:08:02.471 { 00:08:02.471 "name": "BaseBdev2", 00:08:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.471 "is_configured": false, 00:08:02.471 "data_offset": 0, 00:08:02.471 "data_size": 0 00:08:02.471 } 00:08:02.471 ] 00:08:02.471 }' 00:08:02.471 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.471 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.730 [2024-12-09 14:40:40.802999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.730 [2024-12-09 14:40:40.803369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.730 [2024-12-09 14:40:40.803426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.730 [2024-12-09 14:40:40.803717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.730 
[2024-12-09 14:40:40.803918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.730 [2024-12-09 14:40:40.803968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:02.730 [2024-12-09 14:40:40.804169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.730 BaseBdev2 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.730 [ 00:08:02.730 { 00:08:02.730 "name": "BaseBdev2", 00:08:02.730 "aliases": [ 00:08:02.730 "ef3085ae-0a23-4bda-8424-a27bd1f72e10" 00:08:02.730 ], 00:08:02.730 "product_name": "Malloc disk", 00:08:02.730 "block_size": 512, 00:08:02.730 "num_blocks": 65536, 00:08:02.730 "uuid": "ef3085ae-0a23-4bda-8424-a27bd1f72e10", 00:08:02.730 "assigned_rate_limits": { 00:08:02.730 "rw_ios_per_sec": 0, 00:08:02.730 "rw_mbytes_per_sec": 0, 00:08:02.730 "r_mbytes_per_sec": 0, 00:08:02.730 "w_mbytes_per_sec": 0 00:08:02.730 }, 00:08:02.730 "claimed": true, 00:08:02.730 "claim_type": "exclusive_write", 00:08:02.730 "zoned": false, 00:08:02.730 "supported_io_types": { 00:08:02.730 "read": true, 00:08:02.730 "write": true, 00:08:02.730 "unmap": true, 00:08:02.730 "flush": true, 00:08:02.730 "reset": true, 00:08:02.730 "nvme_admin": false, 00:08:02.730 "nvme_io": false, 00:08:02.730 "nvme_io_md": false, 00:08:02.730 "write_zeroes": true, 00:08:02.730 "zcopy": true, 00:08:02.730 "get_zone_info": false, 00:08:02.730 "zone_management": false, 00:08:02.730 "zone_append": false, 00:08:02.730 "compare": false, 00:08:02.730 "compare_and_write": false, 00:08:02.730 "abort": true, 00:08:02.730 "seek_hole": false, 00:08:02.730 "seek_data": false, 00:08:02.730 "copy": true, 00:08:02.730 "nvme_iov_md": false 00:08:02.730 }, 00:08:02.730 "memory_domains": [ 00:08:02.730 { 00:08:02.730 "dma_device_id": "system", 00:08:02.730 "dma_device_type": 1 00:08:02.730 }, 00:08:02.730 { 00:08:02.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.730 "dma_device_type": 2 00:08:02.730 } 00:08:02.730 ], 00:08:02.730 "driver_specific": {} 00:08:02.730 } 00:08:02.730 ] 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.730 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.731 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.731 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.731 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.731 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.731 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.731 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.731 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:02.990 "name": "Existed_Raid", 00:08:02.990 "uuid": "a0f3436e-0f82-46db-872f-5cd76b04643c", 00:08:02.990 "strip_size_kb": 0, 00:08:02.990 "state": "online", 00:08:02.990 "raid_level": "raid1", 00:08:02.990 "superblock": true, 00:08:02.990 "num_base_bdevs": 2, 00:08:02.990 "num_base_bdevs_discovered": 2, 00:08:02.990 "num_base_bdevs_operational": 2, 00:08:02.990 "base_bdevs_list": [ 00:08:02.990 { 00:08:02.990 "name": "BaseBdev1", 00:08:02.990 "uuid": "7ba344ad-eb8e-44d8-9929-0b92ea924c96", 00:08:02.990 "is_configured": true, 00:08:02.990 "data_offset": 2048, 00:08:02.990 "data_size": 63488 00:08:02.990 }, 00:08:02.990 { 00:08:02.990 "name": "BaseBdev2", 00:08:02.990 "uuid": "ef3085ae-0a23-4bda-8424-a27bd1f72e10", 00:08:02.990 "is_configured": true, 00:08:02.990 "data_offset": 2048, 00:08:02.990 "data_size": 63488 00:08:02.990 } 00:08:02.990 ] 00:08:02.990 }' 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.990 14:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.250 14:40:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.250 [2024-12-09 14:40:41.310483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.250 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.250 "name": "Existed_Raid", 00:08:03.250 "aliases": [ 00:08:03.250 "a0f3436e-0f82-46db-872f-5cd76b04643c" 00:08:03.250 ], 00:08:03.250 "product_name": "Raid Volume", 00:08:03.250 "block_size": 512, 00:08:03.250 "num_blocks": 63488, 00:08:03.250 "uuid": "a0f3436e-0f82-46db-872f-5cd76b04643c", 00:08:03.250 "assigned_rate_limits": { 00:08:03.250 "rw_ios_per_sec": 0, 00:08:03.250 "rw_mbytes_per_sec": 0, 00:08:03.250 "r_mbytes_per_sec": 0, 00:08:03.250 "w_mbytes_per_sec": 0 00:08:03.250 }, 00:08:03.250 "claimed": false, 00:08:03.250 "zoned": false, 00:08:03.250 "supported_io_types": { 00:08:03.250 "read": true, 00:08:03.250 "write": true, 00:08:03.250 "unmap": false, 00:08:03.250 "flush": false, 00:08:03.250 "reset": true, 00:08:03.250 "nvme_admin": false, 00:08:03.250 "nvme_io": false, 00:08:03.250 "nvme_io_md": false, 00:08:03.250 "write_zeroes": true, 00:08:03.250 "zcopy": false, 00:08:03.250 "get_zone_info": false, 00:08:03.250 "zone_management": false, 00:08:03.250 "zone_append": false, 00:08:03.250 "compare": false, 00:08:03.250 "compare_and_write": false, 00:08:03.250 "abort": false, 00:08:03.250 "seek_hole": false, 00:08:03.250 "seek_data": false, 00:08:03.250 "copy": false, 00:08:03.250 "nvme_iov_md": false 00:08:03.250 }, 00:08:03.250 "memory_domains": [ 00:08:03.250 { 00:08:03.250 "dma_device_id": "system", 00:08:03.250 
"dma_device_type": 1 00:08:03.250 }, 00:08:03.250 { 00:08:03.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.250 "dma_device_type": 2 00:08:03.250 }, 00:08:03.250 { 00:08:03.250 "dma_device_id": "system", 00:08:03.250 "dma_device_type": 1 00:08:03.250 }, 00:08:03.250 { 00:08:03.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.250 "dma_device_type": 2 00:08:03.250 } 00:08:03.250 ], 00:08:03.250 "driver_specific": { 00:08:03.251 "raid": { 00:08:03.251 "uuid": "a0f3436e-0f82-46db-872f-5cd76b04643c", 00:08:03.251 "strip_size_kb": 0, 00:08:03.251 "state": "online", 00:08:03.251 "raid_level": "raid1", 00:08:03.251 "superblock": true, 00:08:03.251 "num_base_bdevs": 2, 00:08:03.251 "num_base_bdevs_discovered": 2, 00:08:03.251 "num_base_bdevs_operational": 2, 00:08:03.251 "base_bdevs_list": [ 00:08:03.251 { 00:08:03.251 "name": "BaseBdev1", 00:08:03.251 "uuid": "7ba344ad-eb8e-44d8-9929-0b92ea924c96", 00:08:03.251 "is_configured": true, 00:08:03.251 "data_offset": 2048, 00:08:03.251 "data_size": 63488 00:08:03.251 }, 00:08:03.251 { 00:08:03.251 "name": "BaseBdev2", 00:08:03.251 "uuid": "ef3085ae-0a23-4bda-8424-a27bd1f72e10", 00:08:03.251 "is_configured": true, 00:08:03.251 "data_offset": 2048, 00:08:03.251 "data_size": 63488 00:08:03.251 } 00:08:03.251 ] 00:08:03.251 } 00:08:03.251 } 00:08:03.251 }' 00:08:03.251 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.510 BaseBdev2' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.510 14:40:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.510 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.510 [2024-12-09 14:40:41.529917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.770 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.770 "name": "Existed_Raid", 00:08:03.770 "uuid": "a0f3436e-0f82-46db-872f-5cd76b04643c", 00:08:03.770 "strip_size_kb": 0, 00:08:03.770 "state": "online", 00:08:03.770 "raid_level": "raid1", 00:08:03.770 "superblock": true, 00:08:03.770 "num_base_bdevs": 2, 00:08:03.770 "num_base_bdevs_discovered": 1, 00:08:03.770 "num_base_bdevs_operational": 1, 00:08:03.770 "base_bdevs_list": [ 00:08:03.770 { 00:08:03.770 "name": null, 00:08:03.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.771 "is_configured": false, 00:08:03.771 "data_offset": 0, 00:08:03.771 "data_size": 63488 00:08:03.771 }, 00:08:03.771 { 00:08:03.771 "name": "BaseBdev2", 00:08:03.771 "uuid": "ef3085ae-0a23-4bda-8424-a27bd1f72e10", 00:08:03.771 "is_configured": true, 00:08:03.771 "data_offset": 2048, 00:08:03.771 "data_size": 63488 00:08:03.771 } 00:08:03.771 ] 00:08:03.771 }' 00:08:03.771 14:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.771 14:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.030 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.030 [2024-12-09 14:40:42.064940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.030 [2024-12-09 14:40:42.065047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.290 [2024-12-09 14:40:42.160214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.290 [2024-12-09 14:40:42.160364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.290 [2024-12-09 14:40:42.160410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64238 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64238 ']' 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64238 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64238 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.290 14:40:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64238' 00:08:04.290 killing process with pid 64238 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64238 00:08:04.290 [2024-12-09 14:40:42.235681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.290 14:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64238 00:08:04.290 [2024-12-09 14:40:42.253369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.667 14:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:05.667 ************************************ 00:08:05.667 END TEST raid_state_function_test_sb 00:08:05.667 ************************************ 00:08:05.667 00:08:05.667 real 0m5.083s 00:08:05.667 user 0m7.325s 00:08:05.667 sys 0m0.788s 00:08:05.667 14:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.667 14:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.667 14:40:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:05.667 14:40:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:05.667 14:40:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.667 14:40:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.667 ************************************ 00:08:05.667 START TEST raid_superblock_test 00:08:05.667 ************************************ 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64490 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64490 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64490 ']' 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.667 14:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.667 [2024-12-09 14:40:43.546890] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:05.667 [2024-12-09 14:40:43.547100] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64490 ] 00:08:05.667 [2024-12-09 14:40:43.720522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.927 [2024-12-09 14:40:43.844052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.186 [2024-12-09 14:40:44.048779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.186 [2024-12-09 14:40:44.048817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.445 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.445 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.445 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:06.446 14:40:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.446 malloc1 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.446 [2024-12-09 14:40:44.432274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:06.446 [2024-12-09 14:40:44.432384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.446 [2024-12-09 14:40:44.432427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:06.446 [2024-12-09 14:40:44.432468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.446 
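The xtrace above (bdev_raid.sh@416-423) loops over the base bdev indices, deriving a malloc name, a passthru name, and a fixed-pattern UUID per index before issuing `bdev_malloc_create` / `bdev_passthru_create`. A self-contained sketch of that bookkeeping, with the UUID derivation via `printf` being my reconstruction of the pattern visible in the trace (the script itself may build the UUIDs differently):

```shell
# Sketch of the per-base-bdev naming loop traced at bdev_raid.sh@416-423.
# Names (malloc1/pt1, malloc2/pt2) and UUIDs match the trace; the printf-based
# UUID construction is an assumption for illustration.
num_base_bdevs=2
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs_malloc+=("malloc$i")
  base_bdevs_pt+=("pt$i")
  base_bdevs_pt_uuid+=("$(printf '00000000-0000-0000-0000-%012d' "$i")")
done
echo "${base_bdevs_pt[*]}"       # pt1 pt2
echo "${base_bdevs_pt_uuid[1]}"  # 00000000-0000-0000-0000-000000000002
```

Each (malloc, pt, uuid) triple then feeds one `rpc_cmd bdev_malloc_create 32 512 -b $bdev_malloc` followed by one `rpc_cmd bdev_passthru_create -b $bdev_malloc -p $bdev_pt -u $bdev_pt_uuid`, as the trace shows.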
[2024-12-09 14:40:44.434656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.446 [2024-12-09 14:40:44.434725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:06.446 pt1 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.446 malloc2 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.446 14:40:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.446 [2024-12-09 14:40:44.492809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:06.446 [2024-12-09 14:40:44.492867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.446 [2024-12-09 14:40:44.492893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:06.446 [2024-12-09 14:40:44.492902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.446 [2024-12-09 14:40:44.494960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.446 [2024-12-09 14:40:44.494998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:06.446 pt2 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.446 [2024-12-09 14:40:44.504826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:06.446 [2024-12-09 14:40:44.506635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.446 [2024-12-09 14:40:44.506799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:06.446 [2024-12-09 14:40:44.506816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.446 [2024-12-09 
14:40:44.507056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:06.446 [2024-12-09 14:40:44.507206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:06.446 [2024-12-09 14:40:44.507221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:06.446 [2024-12-09 14:40:44.507360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.446 14:40:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.446 "name": "raid_bdev1", 00:08:06.446 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:06.446 "strip_size_kb": 0, 00:08:06.446 "state": "online", 00:08:06.446 "raid_level": "raid1", 00:08:06.446 "superblock": true, 00:08:06.446 "num_base_bdevs": 2, 00:08:06.446 "num_base_bdevs_discovered": 2, 00:08:06.446 "num_base_bdevs_operational": 2, 00:08:06.446 "base_bdevs_list": [ 00:08:06.446 { 00:08:06.446 "name": "pt1", 00:08:06.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.446 "is_configured": true, 00:08:06.446 "data_offset": 2048, 00:08:06.446 "data_size": 63488 00:08:06.446 }, 00:08:06.446 { 00:08:06.446 "name": "pt2", 00:08:06.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.446 "is_configured": true, 00:08:06.446 "data_offset": 2048, 00:08:06.446 "data_size": 63488 00:08:06.446 } 00:08:06.446 ] 00:08:06.446 }' 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.446 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.014 
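The `verify_raid_bdev_state` helper traced above (bdev_raid.sh@103-115) filters `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares the captured fields against the expected values. A minimal stand-alone sketch of the state comparison, substituting a trimmed copy of the JSON from the trace for the live RPC output:

```shell
# Minimal sketch of verify_raid_bdev_state's state check; raid_bdev_info is a
# trimmed stand-in for the jq-filtered `bdev_raid_get_bdevs all` output above.
raid_bdev_info='{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":2}'
expected_state=online
state=${raid_bdev_info#*\"state\":\"}   # strip everything through '"state":"'
state=${state%%\"*}                     # keep up to the closing quote
[ "$state" = "$expected_state" ] && echo "raid_bdev1 is $state"
```

The real helper extracts each field the same way (via `jq` rather than parameter expansion) and also checks `raid_level`, `strip_size_kb`, and the base bdev counts shown in the JSON.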
14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.014 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.015 [2024-12-09 14:40:44.968309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.015 14:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.015 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.015 "name": "raid_bdev1", 00:08:07.015 "aliases": [ 00:08:07.015 "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7" 00:08:07.015 ], 00:08:07.015 "product_name": "Raid Volume", 00:08:07.015 "block_size": 512, 00:08:07.015 "num_blocks": 63488, 00:08:07.015 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:07.015 "assigned_rate_limits": { 00:08:07.015 "rw_ios_per_sec": 0, 00:08:07.015 "rw_mbytes_per_sec": 0, 00:08:07.015 "r_mbytes_per_sec": 0, 00:08:07.015 "w_mbytes_per_sec": 0 00:08:07.015 }, 00:08:07.015 "claimed": false, 00:08:07.015 "zoned": false, 00:08:07.015 "supported_io_types": { 00:08:07.015 "read": true, 00:08:07.015 "write": true, 00:08:07.015 "unmap": false, 00:08:07.015 "flush": false, 00:08:07.015 "reset": true, 00:08:07.015 "nvme_admin": false, 00:08:07.015 "nvme_io": false, 00:08:07.015 "nvme_io_md": false, 00:08:07.015 "write_zeroes": true, 00:08:07.015 "zcopy": false, 00:08:07.015 "get_zone_info": false, 00:08:07.015 "zone_management": false, 00:08:07.015 "zone_append": false, 00:08:07.015 "compare": false, 00:08:07.015 "compare_and_write": false, 00:08:07.015 "abort": false, 00:08:07.015 "seek_hole": false, 
00:08:07.015 "seek_data": false, 00:08:07.015 "copy": false, 00:08:07.015 "nvme_iov_md": false 00:08:07.015 }, 00:08:07.015 "memory_domains": [ 00:08:07.015 { 00:08:07.015 "dma_device_id": "system", 00:08:07.015 "dma_device_type": 1 00:08:07.015 }, 00:08:07.015 { 00:08:07.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.015 "dma_device_type": 2 00:08:07.015 }, 00:08:07.015 { 00:08:07.015 "dma_device_id": "system", 00:08:07.015 "dma_device_type": 1 00:08:07.015 }, 00:08:07.015 { 00:08:07.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.015 "dma_device_type": 2 00:08:07.015 } 00:08:07.015 ], 00:08:07.015 "driver_specific": { 00:08:07.015 "raid": { 00:08:07.015 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:07.015 "strip_size_kb": 0, 00:08:07.015 "state": "online", 00:08:07.015 "raid_level": "raid1", 00:08:07.015 "superblock": true, 00:08:07.015 "num_base_bdevs": 2, 00:08:07.015 "num_base_bdevs_discovered": 2, 00:08:07.015 "num_base_bdevs_operational": 2, 00:08:07.015 "base_bdevs_list": [ 00:08:07.015 { 00:08:07.015 "name": "pt1", 00:08:07.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.015 "is_configured": true, 00:08:07.015 "data_offset": 2048, 00:08:07.015 "data_size": 63488 00:08:07.015 }, 00:08:07.015 { 00:08:07.015 "name": "pt2", 00:08:07.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.015 "is_configured": true, 00:08:07.015 "data_offset": 2048, 00:08:07.015 "data_size": 63488 00:08:07.015 } 00:08:07.015 ] 00:08:07.015 } 00:08:07.015 } 00:08:07.015 }' 00:08:07.015 14:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:07.015 pt2' 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.015 14:40:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.015 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 [2024-12-09 14:40:45.187961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7 ']' 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 [2024-12-09 14:40:45.227603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.274 [2024-12-09 14:40:45.227646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.274 [2024-12-09 14:40:45.227739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.274 [2024-12-09 14:40:45.227807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.274 [2024-12-09 14:40:45.227822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.274 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.274 [2024-12-09 14:40:45.367337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:07.274 [2024-12-09 14:40:45.369170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:07.274 [2024-12-09 14:40:45.369236] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:07.275 [2024-12-09 14:40:45.369285] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:07.275 [2024-12-09 14:40:45.369300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.275 [2024-12-09 14:40:45.369309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:07.275 request: 00:08:07.275 { 00:08:07.275 "name": "raid_bdev1", 00:08:07.275 "raid_level": "raid1", 00:08:07.275 "base_bdevs": [ 00:08:07.275 "malloc1", 00:08:07.275 "malloc2" 00:08:07.275 ], 00:08:07.275 "superblock": false, 00:08:07.275 "method": "bdev_raid_create", 00:08:07.275 "req_id": 1 00:08:07.275 } 00:08:07.275 Got JSON-RPC error response 00:08:07.275 response: 00:08:07.275 { 00:08:07.275 "code": -17, 00:08:07.275 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:07.275 } 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:07.275 14:40:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.533 [2024-12-09 14:40:45.431212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:07.533 [2024-12-09 14:40:45.431305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.533 [2024-12-09 14:40:45.431343] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:07.533 [2024-12-09 14:40:45.431374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.533 [2024-12-09 14:40:45.433436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.533 [2024-12-09 14:40:45.433508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:07.533 [2024-12-09 14:40:45.433616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:07.533 [2024-12-09 14:40:45.433693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:07.533 pt1 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.533 14:40:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.533 "name": "raid_bdev1", 00:08:07.533 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:07.533 "strip_size_kb": 0, 00:08:07.533 "state": "configuring", 00:08:07.533 "raid_level": "raid1", 00:08:07.533 "superblock": true, 00:08:07.533 "num_base_bdevs": 2, 00:08:07.533 "num_base_bdevs_discovered": 1, 00:08:07.533 "num_base_bdevs_operational": 2, 00:08:07.533 "base_bdevs_list": [ 00:08:07.533 { 00:08:07.533 "name": "pt1", 00:08:07.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.533 
"is_configured": true, 00:08:07.533 "data_offset": 2048, 00:08:07.533 "data_size": 63488 00:08:07.533 }, 00:08:07.533 { 00:08:07.533 "name": null, 00:08:07.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.533 "is_configured": false, 00:08:07.533 "data_offset": 2048, 00:08:07.533 "data_size": 63488 00:08:07.533 } 00:08:07.533 ] 00:08:07.533 }' 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.533 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.790 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:07.790 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:07.790 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:07.790 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:07.790 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.790 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.790 [2024-12-09 14:40:45.910513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:07.790 [2024-12-09 14:40:45.910670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.790 [2024-12-09 14:40:45.910731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:07.790 [2024-12-09 14:40:45.910771] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.790 [2024-12-09 14:40:45.911351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.790 [2024-12-09 14:40:45.911445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:07.790 [2024-12-09 14:40:45.911583] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.049 [2024-12-09 14:40:45.911646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.049 [2024-12-09 14:40:45.911793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:08.049 [2024-12-09 14:40:45.911807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.049 [2024-12-09 14:40:45.912094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:08.049 [2024-12-09 14:40:45.912283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:08.049 [2024-12-09 14:40:45.912294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:08.049 [2024-12-09 14:40:45.912464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.049 pt2 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.049 
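The `verify_raid_bdev_properties` passes traced in this test (bdev_raid.sh@181-193) join `block_size`, `md_size`, `md_interleave`, and `dif_type` into a single string per bdev and require every base bdev's string to equal the raid volume's. With `block_size` 512 and the other three fields null, `jq`'s `join(" ")` yields `512` followed by three empty fields, i.e. `'512   '`, which is what the `cmp_raid_bdev='512 '` / `[[ 512 == \5\1\2\ \ \ ]]` lines above show (trailing spaces collapsed by the log). A sketch of that comparison, with the per-bdev strings hard-coded in place of the `jq` extraction:

```shell
# Sketch of verify_raid_bdev_properties: every base bdev's joined
# block_size/md_size/md_interleave/dif_type string must match the raid
# volume's ('512' plus three empty fields). Values copied from the trace;
# the literal strings stand in for the jq extraction.
cmp_raid_bdev='512   '
for name in pt1 pt2; do
  cmp_base_bdev='512   '  # stand-in for `bdev_get_bdevs -b $name` fields
  [ "$cmp_base_bdev" = "$cmp_raid_bdev" ] || { echo "$name mismatch" >&2; exit 1; }
done
echo 'base bdev properties match raid volume'
```

The glob-escaped `[[ 512 == \5\1\2\ \ \ ]]` form in the trace is the same equality test: escaping each character makes the right-hand side a literal string, trailing spaces included.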
14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.049 "name": "raid_bdev1", 00:08:08.049 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:08.049 "strip_size_kb": 0, 00:08:08.049 "state": "online", 00:08:08.049 "raid_level": "raid1", 00:08:08.049 "superblock": true, 00:08:08.049 "num_base_bdevs": 2, 00:08:08.049 "num_base_bdevs_discovered": 2, 00:08:08.049 "num_base_bdevs_operational": 2, 00:08:08.049 "base_bdevs_list": [ 00:08:08.049 { 00:08:08.049 "name": "pt1", 00:08:08.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.049 "is_configured": true, 00:08:08.049 "data_offset": 2048, 00:08:08.049 "data_size": 63488 00:08:08.049 }, 00:08:08.049 { 00:08:08.049 "name": "pt2", 00:08:08.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.049 "is_configured": true, 00:08:08.049 "data_offset": 2048, 00:08:08.049 "data_size": 63488 00:08:08.049 } 00:08:08.049 ] 00:08:08.049 }' 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:08.049 14:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.307 [2024-12-09 14:40:46.326098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.307 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.307 "name": "raid_bdev1", 00:08:08.307 "aliases": [ 00:08:08.307 "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7" 00:08:08.307 ], 00:08:08.307 "product_name": "Raid Volume", 00:08:08.307 "block_size": 512, 00:08:08.307 "num_blocks": 63488, 00:08:08.307 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:08.307 "assigned_rate_limits": { 00:08:08.307 "rw_ios_per_sec": 0, 00:08:08.307 "rw_mbytes_per_sec": 0, 00:08:08.307 "r_mbytes_per_sec": 0, 00:08:08.307 "w_mbytes_per_sec": 0 
00:08:08.307 }, 00:08:08.307 "claimed": false, 00:08:08.307 "zoned": false, 00:08:08.307 "supported_io_types": { 00:08:08.307 "read": true, 00:08:08.307 "write": true, 00:08:08.307 "unmap": false, 00:08:08.307 "flush": false, 00:08:08.307 "reset": true, 00:08:08.307 "nvme_admin": false, 00:08:08.307 "nvme_io": false, 00:08:08.307 "nvme_io_md": false, 00:08:08.307 "write_zeroes": true, 00:08:08.307 "zcopy": false, 00:08:08.307 "get_zone_info": false, 00:08:08.307 "zone_management": false, 00:08:08.307 "zone_append": false, 00:08:08.307 "compare": false, 00:08:08.307 "compare_and_write": false, 00:08:08.307 "abort": false, 00:08:08.307 "seek_hole": false, 00:08:08.307 "seek_data": false, 00:08:08.307 "copy": false, 00:08:08.307 "nvme_iov_md": false 00:08:08.307 }, 00:08:08.307 "memory_domains": [ 00:08:08.307 { 00:08:08.307 "dma_device_id": "system", 00:08:08.307 "dma_device_type": 1 00:08:08.307 }, 00:08:08.307 { 00:08:08.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.307 "dma_device_type": 2 00:08:08.307 }, 00:08:08.307 { 00:08:08.307 "dma_device_id": "system", 00:08:08.307 "dma_device_type": 1 00:08:08.307 }, 00:08:08.307 { 00:08:08.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.307 "dma_device_type": 2 00:08:08.307 } 00:08:08.307 ], 00:08:08.307 "driver_specific": { 00:08:08.307 "raid": { 00:08:08.307 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:08.307 "strip_size_kb": 0, 00:08:08.307 "state": "online", 00:08:08.307 "raid_level": "raid1", 00:08:08.307 "superblock": true, 00:08:08.307 "num_base_bdevs": 2, 00:08:08.307 "num_base_bdevs_discovered": 2, 00:08:08.307 "num_base_bdevs_operational": 2, 00:08:08.307 "base_bdevs_list": [ 00:08:08.307 { 00:08:08.307 "name": "pt1", 00:08:08.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.307 "is_configured": true, 00:08:08.307 "data_offset": 2048, 00:08:08.307 "data_size": 63488 00:08:08.307 }, 00:08:08.307 { 00:08:08.307 "name": "pt2", 00:08:08.307 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:08.307 "is_configured": true, 00:08:08.307 "data_offset": 2048, 00:08:08.307 "data_size": 63488 00:08:08.307 } 00:08:08.307 ] 00:08:08.307 } 00:08:08.307 } 00:08:08.307 }' 00:08:08.308 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.308 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:08.308 pt2' 00:08:08.308 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.567 [2024-12-09 14:40:46.533736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7 '!=' f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7 ']' 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.567 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:08.568 [2024-12-09 14:40:46.577429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.568 "name": "raid_bdev1", 
00:08:08.568 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:08.568 "strip_size_kb": 0, 00:08:08.568 "state": "online", 00:08:08.568 "raid_level": "raid1", 00:08:08.568 "superblock": true, 00:08:08.568 "num_base_bdevs": 2, 00:08:08.568 "num_base_bdevs_discovered": 1, 00:08:08.568 "num_base_bdevs_operational": 1, 00:08:08.568 "base_bdevs_list": [ 00:08:08.568 { 00:08:08.568 "name": null, 00:08:08.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.568 "is_configured": false, 00:08:08.568 "data_offset": 0, 00:08:08.568 "data_size": 63488 00:08:08.568 }, 00:08:08.568 { 00:08:08.568 "name": "pt2", 00:08:08.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.568 "is_configured": true, 00:08:08.568 "data_offset": 2048, 00:08:08.568 "data_size": 63488 00:08:08.568 } 00:08:08.568 ] 00:08:08.568 }' 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.568 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 14:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.136 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.136 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 [2024-12-09 14:40:46.996698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.136 [2024-12-09 14:40:46.996792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.136 [2024-12-09 14:40:46.996902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.136 [2024-12-09 14:40:46.996989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.136 [2024-12-09 14:40:46.997041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:09.136 14:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:09.136 14:40:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 [2024-12-09 14:40:47.072559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:09.136 [2024-12-09 14:40:47.072643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.136 [2024-12-09 14:40:47.072661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:09.136 [2024-12-09 14:40:47.072672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.136 [2024-12-09 14:40:47.074979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.136 [2024-12-09 14:40:47.075024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:09.136 [2024-12-09 14:40:47.075119] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:09.136 [2024-12-09 14:40:47.075173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:09.136 [2024-12-09 14:40:47.075295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:09.136 [2024-12-09 14:40:47.075307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.136 [2024-12-09 14:40:47.075567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:09.136 [2024-12-09 14:40:47.075744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:09.136 [2024-12-09 14:40:47.075755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:09.136 
[2024-12-09 14:40:47.075919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.136 pt2 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.136 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.137 "name": 
"raid_bdev1", 00:08:09.137 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:09.137 "strip_size_kb": 0, 00:08:09.137 "state": "online", 00:08:09.137 "raid_level": "raid1", 00:08:09.137 "superblock": true, 00:08:09.137 "num_base_bdevs": 2, 00:08:09.137 "num_base_bdevs_discovered": 1, 00:08:09.137 "num_base_bdevs_operational": 1, 00:08:09.137 "base_bdevs_list": [ 00:08:09.137 { 00:08:09.137 "name": null, 00:08:09.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.137 "is_configured": false, 00:08:09.137 "data_offset": 2048, 00:08:09.137 "data_size": 63488 00:08:09.137 }, 00:08:09.137 { 00:08:09.137 "name": "pt2", 00:08:09.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.137 "is_configured": true, 00:08:09.137 "data_offset": 2048, 00:08:09.137 "data_size": 63488 00:08:09.137 } 00:08:09.137 ] 00:08:09.137 }' 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.137 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.396 [2024-12-09 14:40:47.491800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.396 [2024-12-09 14:40:47.491890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.396 [2024-12-09 14:40:47.491992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.396 [2024-12-09 14:40:47.492063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.396 [2024-12-09 14:40:47.492108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.396 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.655 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:09.655 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.656 [2024-12-09 14:40:47.555723] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.656 [2024-12-09 14:40:47.555789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.656 [2024-12-09 14:40:47.555811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:09.656 [2024-12-09 14:40:47.555821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.656 [2024-12-09 14:40:47.557975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.656 [2024-12-09 14:40:47.558023] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.656 [2024-12-09 14:40:47.558109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:09.656 [2024-12-09 14:40:47.558150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:09.656 [2024-12-09 14:40:47.558322] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:09.656 [2024-12-09 14:40:47.558334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.656 [2024-12-09 14:40:47.558350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:09.656 [2024-12-09 14:40:47.558413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:09.656 [2024-12-09 14:40:47.558487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:09.656 [2024-12-09 14:40:47.558502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.656 [2024-12-09 14:40:47.558766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:09.656 [2024-12-09 14:40:47.558913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:09.656 [2024-12-09 14:40:47.558926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:09.656 [2024-12-09 14:40:47.559073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.656 pt1 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.656 "name": "raid_bdev1", 00:08:09.656 "uuid": "f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7", 00:08:09.656 "strip_size_kb": 0, 00:08:09.656 "state": "online", 00:08:09.656 "raid_level": "raid1", 00:08:09.656 "superblock": true, 00:08:09.656 "num_base_bdevs": 2, 00:08:09.656 "num_base_bdevs_discovered": 1, 00:08:09.656 "num_base_bdevs_operational": 1, 00:08:09.656 
"base_bdevs_list": [ 00:08:09.656 { 00:08:09.656 "name": null, 00:08:09.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.656 "is_configured": false, 00:08:09.656 "data_offset": 2048, 00:08:09.656 "data_size": 63488 00:08:09.656 }, 00:08:09.656 { 00:08:09.656 "name": "pt2", 00:08:09.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.656 "is_configured": true, 00:08:09.656 "data_offset": 2048, 00:08:09.656 "data_size": 63488 00:08:09.656 } 00:08:09.656 ] 00:08:09.656 }' 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.656 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.915 14:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.915 [2024-12-09 14:40:47.999288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.915 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7 '!=' f4cc1c4b-1a00-4573-b7f9-915ccb8bb9c7 ']' 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64490 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64490 ']' 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64490 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64490 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64490' 00:08:10.175 killing process with pid 64490 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64490 00:08:10.175 [2024-12-09 14:40:48.082939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.175 [2024-12-09 14:40:48.083114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.175 14:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64490 00:08:10.175 [2024-12-09 14:40:48.083204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.175 [2024-12-09 14:40:48.083225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:10.175 [2024-12-09 14:40:48.288349] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.555 14:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:11.555 00:08:11.555 real 0m5.953s 00:08:11.555 user 0m8.967s 00:08:11.555 sys 0m1.048s 00:08:11.555 14:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.555 14:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.555 ************************************ 00:08:11.555 END TEST raid_superblock_test 00:08:11.555 ************************************ 00:08:11.555 14:40:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:11.555 14:40:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:11.556 14:40:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.556 14:40:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.556 ************************************ 00:08:11.556 START TEST raid_read_error_test 00:08:11.556 ************************************ 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1pWjcJS5mf 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64821 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64821 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 64821 ']' 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.556 14:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.556 [2024-12-09 14:40:49.593065] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:11.556 [2024-12-09 14:40:49.593265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64821 ] 00:08:11.816 [2024-12-09 14:40:49.754349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.816 [2024-12-09 14:40:49.866089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.076 [2024-12-09 14:40:50.065816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.076 [2024-12-09 14:40:50.065860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.645 14:40:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 BaseBdev1_malloc 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 true 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 [2024-12-09 14:40:50.546042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:12.645 [2024-12-09 14:40:50.546154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.645 [2024-12-09 14:40:50.546198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:12.645 [2024-12-09 14:40:50.546230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.645 [2024-12-09 14:40:50.548255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.645 [2024-12-09 14:40:50.548347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:08:12.645 BaseBdev1 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 BaseBdev2_malloc 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 true 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 [2024-12-09 14:40:50.612350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:12.645 [2024-12-09 14:40:50.612453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.645 [2024-12-09 14:40:50.612488] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:12.645 [2024-12-09 14:40:50.612519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:12.645 [2024-12-09 14:40:50.614559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.645 [2024-12-09 14:40:50.614639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:12.645 BaseBdev2 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 [2024-12-09 14:40:50.624384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.645 [2024-12-09 14:40:50.626196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.645 [2024-12-09 14:40:50.626445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.645 [2024-12-09 14:40:50.626494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:12.645 [2024-12-09 14:40:50.626745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:12.645 [2024-12-09 14:40:50.626957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.645 [2024-12-09 14:40:50.626999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:12.645 [2024-12-09 14:40:50.627200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.645 "name": "raid_bdev1", 00:08:12.645 "uuid": "ce4bf334-4fa5-4773-9dad-642f10b28106", 00:08:12.645 "strip_size_kb": 0, 00:08:12.645 "state": "online", 00:08:12.645 "raid_level": "raid1", 00:08:12.645 "superblock": true, 00:08:12.645 "num_base_bdevs": 2, 00:08:12.645 "num_base_bdevs_discovered": 2, 00:08:12.645 "num_base_bdevs_operational": 
2, 00:08:12.645 "base_bdevs_list": [ 00:08:12.645 { 00:08:12.645 "name": "BaseBdev1", 00:08:12.645 "uuid": "6babbeb2-2ed5-53a0-903c-c7be3564ec06", 00:08:12.645 "is_configured": true, 00:08:12.645 "data_offset": 2048, 00:08:12.645 "data_size": 63488 00:08:12.645 }, 00:08:12.645 { 00:08:12.645 "name": "BaseBdev2", 00:08:12.645 "uuid": "f5d3f516-078a-5d81-aa51-c70f282d588e", 00:08:12.645 "is_configured": true, 00:08:12.645 "data_offset": 2048, 00:08:12.645 "data_size": 63488 00:08:12.645 } 00:08:12.645 ] 00:08:12.645 }' 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.645 14:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.214 14:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:13.214 14:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:13.214 [2024-12-09 14:40:51.180669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:14.154 
14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.154 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.155 "name": "raid_bdev1", 00:08:14.155 "uuid": "ce4bf334-4fa5-4773-9dad-642f10b28106", 00:08:14.155 "strip_size_kb": 0, 00:08:14.155 "state": "online", 00:08:14.155 "raid_level": "raid1", 00:08:14.155 "superblock": true, 00:08:14.155 "num_base_bdevs": 
2, 00:08:14.155 "num_base_bdevs_discovered": 2, 00:08:14.155 "num_base_bdevs_operational": 2, 00:08:14.155 "base_bdevs_list": [ 00:08:14.155 { 00:08:14.155 "name": "BaseBdev1", 00:08:14.155 "uuid": "6babbeb2-2ed5-53a0-903c-c7be3564ec06", 00:08:14.155 "is_configured": true, 00:08:14.155 "data_offset": 2048, 00:08:14.155 "data_size": 63488 00:08:14.155 }, 00:08:14.155 { 00:08:14.155 "name": "BaseBdev2", 00:08:14.155 "uuid": "f5d3f516-078a-5d81-aa51-c70f282d588e", 00:08:14.155 "is_configured": true, 00:08:14.155 "data_offset": 2048, 00:08:14.155 "data_size": 63488 00:08:14.155 } 00:08:14.155 ] 00:08:14.155 }' 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.155 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.417 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.417 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.417 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.417 [2024-12-09 14:40:52.535170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.679 [2024-12-09 14:40:52.535281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.679 [2024-12-09 14:40:52.538040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.679 [2024-12-09 14:40:52.538088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.679 [2024-12-09 14:40:52.538173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.679 [2024-12-09 14:40:52.538195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:14.679 { 00:08:14.679 "results": [ 00:08:14.679 { 00:08:14.679 "job": 
"raid_bdev1", 00:08:14.679 "core_mask": "0x1", 00:08:14.679 "workload": "randrw", 00:08:14.679 "percentage": 50, 00:08:14.679 "status": "finished", 00:08:14.679 "queue_depth": 1, 00:08:14.679 "io_size": 131072, 00:08:14.679 "runtime": 1.355369, 00:08:14.679 "iops": 17441.744646660798, 00:08:14.679 "mibps": 2180.2180808325998, 00:08:14.679 "io_failed": 0, 00:08:14.679 "io_timeout": 0, 00:08:14.679 "avg_latency_us": 54.61441860808784, 00:08:14.679 "min_latency_us": 23.811353711790392, 00:08:14.679 "max_latency_us": 1395.1441048034935 00:08:14.679 } 00:08:14.679 ], 00:08:14.679 "core_count": 1 00:08:14.679 } 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64821 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 64821 ']' 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 64821 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64821 00:08:14.679 killing process with pid 64821 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64821' 00:08:14.679 14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 64821 00:08:14.679 [2024-12-09 14:40:52.585116] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.679 
14:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 64821 00:08:14.679 [2024-12-09 14:40:52.719381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1pWjcJS5mf 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:16.062 ************************************ 00:08:16.062 END TEST raid_read_error_test 00:08:16.062 ************************************ 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:16.062 00:08:16.062 real 0m4.419s 00:08:16.062 user 0m5.348s 00:08:16.062 sys 0m0.545s 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.062 14:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.062 14:40:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:16.062 14:40:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:16.062 14:40:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.062 14:40:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.062 ************************************ 00:08:16.062 START TEST raid_write_error_test 00:08:16.062 ************************************ 00:08:16.062 14:40:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.062 
14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7NG01wDeSs 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64964 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64964 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64964 ']' 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.062 14:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.062 [2024-12-09 14:40:54.079333] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:08:16.062 [2024-12-09 14:40:54.079533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64964 ] 00:08:16.322 [2024-12-09 14:40:54.255228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.322 [2024-12-09 14:40:54.370134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.581 [2024-12-09 14:40:54.567502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.581 [2024-12-09 14:40:54.567545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.840 BaseBdev1_malloc 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.840 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 true 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 [2024-12-09 14:40:54.974023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.100 [2024-12-09 14:40:54.974079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.100 [2024-12-09 14:40:54.974098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.100 [2024-12-09 14:40:54.974108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.100 [2024-12-09 14:40:54.976166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.100 [2024-12-09 14:40:54.976206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.100 BaseBdev1 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.100 14:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 BaseBdev2_malloc 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.100 14:40:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 true 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 [2024-12-09 14:40:55.039228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.100 [2024-12-09 14:40:55.039348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.100 [2024-12-09 14:40:55.039373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.100 [2024-12-09 14:40:55.039385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.100 [2024-12-09 14:40:55.041559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.100 [2024-12-09 14:40:55.041629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.100 BaseBdev2 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 [2024-12-09 14:40:55.051247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:17.100 [2024-12-09 14:40:55.053000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.100 [2024-12-09 14:40:55.053192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.100 [2024-12-09 14:40:55.053208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.100 [2024-12-09 14:40:55.053447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:17.100 [2024-12-09 14:40:55.053642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.100 [2024-12-09 14:40:55.053653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:17.100 [2024-12-09 14:40:55.053813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.100 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.100 "name": "raid_bdev1", 00:08:17.100 "uuid": "74afd1ac-9a01-46b9-a34a-8072dda16f33", 00:08:17.100 "strip_size_kb": 0, 00:08:17.100 "state": "online", 00:08:17.100 "raid_level": "raid1", 00:08:17.100 "superblock": true, 00:08:17.100 "num_base_bdevs": 2, 00:08:17.100 "num_base_bdevs_discovered": 2, 00:08:17.100 "num_base_bdevs_operational": 2, 00:08:17.100 "base_bdevs_list": [ 00:08:17.100 { 00:08:17.100 "name": "BaseBdev1", 00:08:17.100 "uuid": "5b77dceb-48ba-5cdd-a68d-7195fe2c850b", 00:08:17.100 "is_configured": true, 00:08:17.100 "data_offset": 2048, 00:08:17.100 "data_size": 63488 00:08:17.100 }, 00:08:17.101 { 00:08:17.101 "name": "BaseBdev2", 00:08:17.101 "uuid": "ffeec3fe-ddfe-5fde-b5fd-071d68446ade", 00:08:17.101 "is_configured": true, 00:08:17.101 "data_offset": 2048, 00:08:17.101 "data_size": 63488 00:08:17.101 } 00:08:17.101 ] 00:08:17.101 }' 00:08:17.101 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.101 14:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.359 14:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.359 14:40:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.622 [2024-12-09 14:40:55.571789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.562 [2024-12-09 14:40:56.484155] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:18.562 [2024-12-09 14:40:56.484287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.562 [2024-12-09 14:40:56.484521] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.562 "name": "raid_bdev1", 00:08:18.562 "uuid": "74afd1ac-9a01-46b9-a34a-8072dda16f33", 00:08:18.562 "strip_size_kb": 0, 00:08:18.562 "state": "online", 00:08:18.562 "raid_level": "raid1", 00:08:18.562 "superblock": true, 00:08:18.562 "num_base_bdevs": 2, 00:08:18.562 "num_base_bdevs_discovered": 1, 00:08:18.562 "num_base_bdevs_operational": 1, 00:08:18.562 "base_bdevs_list": [ 00:08:18.562 { 00:08:18.562 "name": null, 00:08:18.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.562 "is_configured": false, 00:08:18.562 "data_offset": 0, 00:08:18.562 "data_size": 63488 00:08:18.562 }, 00:08:18.562 { 00:08:18.562 "name": 
"BaseBdev2", 00:08:18.562 "uuid": "ffeec3fe-ddfe-5fde-b5fd-071d68446ade", 00:08:18.562 "is_configured": true, 00:08:18.562 "data_offset": 2048, 00:08:18.562 "data_size": 63488 00:08:18.562 } 00:08:18.562 ] 00:08:18.562 }' 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.562 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.822 [2024-12-09 14:40:56.922770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.822 [2024-12-09 14:40:56.922858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.822 [2024-12-09 14:40:56.925578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.822 [2024-12-09 14:40:56.925656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.822 [2024-12-09 14:40:56.925736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.822 [2024-12-09 14:40:56.925783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:18.822 { 00:08:18.822 "results": [ 00:08:18.822 { 00:08:18.822 "job": "raid_bdev1", 00:08:18.822 "core_mask": "0x1", 00:08:18.822 "workload": "randrw", 00:08:18.822 "percentage": 50, 00:08:18.822 "status": "finished", 00:08:18.822 "queue_depth": 1, 00:08:18.822 "io_size": 131072, 00:08:18.822 "runtime": 1.351795, 00:08:18.822 "iops": 20084.406289415187, 00:08:18.822 "mibps": 2510.5507861768983, 00:08:18.822 "io_failed": 0, 00:08:18.822 "io_timeout": 0, 
00:08:18.822 "avg_latency_us": 47.09726169509518, 00:08:18.822 "min_latency_us": 23.36419213973799, 00:08:18.822 "max_latency_us": 1416.6078602620087 00:08:18.822 } 00:08:18.822 ], 00:08:18.822 "core_count": 1 00:08:18.822 } 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64964 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64964 ']' 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64964 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.822 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64964 00:08:19.082 killing process with pid 64964 00:08:19.082 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.082 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.082 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64964' 00:08:19.082 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64964 00:08:19.082 [2024-12-09 14:40:56.973171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.082 14:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64964 00:08:19.082 [2024-12-09 14:40:57.108006] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7NG01wDeSs 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:20.463 ************************************ 00:08:20.463 END TEST raid_write_error_test 00:08:20.463 ************************************ 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:20.463 00:08:20.463 real 0m4.316s 00:08:20.463 user 0m5.145s 00:08:20.463 sys 0m0.542s 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.463 14:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.463 14:40:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:20.463 14:40:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:20.463 14:40:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:20.463 14:40:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:20.463 14:40:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.463 14:40:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.463 ************************************ 00:08:20.463 START TEST raid_state_function_test 00:08:20.463 ************************************ 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.463 
14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65102 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65102' 00:08:20.463 Process raid pid: 65102 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65102 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65102 ']' 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.463 14:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.463 [2024-12-09 14:40:58.453460] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:20.463 [2024-12-09 14:40:58.453582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.722 [2024-12-09 14:40:58.628978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.722 [2024-12-09 14:40:58.742089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.982 [2024-12-09 14:40:58.956073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.982 [2024-12-09 14:40:58.956116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.241 [2024-12-09 14:40:59.298777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.241 [2024-12-09 14:40:59.298848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.241 [2024-12-09 14:40:59.298859] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.241 [2024-12-09 14:40:59.298869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.241 [2024-12-09 14:40:59.298875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.241 [2024-12-09 14:40:59.298884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.241 "name": "Existed_Raid", 00:08:21.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.241 "strip_size_kb": 64, 00:08:21.241 "state": "configuring", 00:08:21.241 "raid_level": "raid0", 00:08:21.241 "superblock": false, 00:08:21.241 "num_base_bdevs": 3, 00:08:21.241 "num_base_bdevs_discovered": 0, 00:08:21.241 "num_base_bdevs_operational": 3, 00:08:21.241 "base_bdevs_list": [ 00:08:21.241 { 00:08:21.241 "name": "BaseBdev1", 00:08:21.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.241 "is_configured": false, 00:08:21.241 "data_offset": 0, 00:08:21.241 "data_size": 0 00:08:21.241 }, 00:08:21.241 { 00:08:21.241 "name": "BaseBdev2", 00:08:21.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.241 "is_configured": false, 00:08:21.241 "data_offset": 0, 00:08:21.241 "data_size": 0 00:08:21.241 }, 00:08:21.241 { 00:08:21.241 "name": "BaseBdev3", 00:08:21.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.241 "is_configured": false, 00:08:21.241 "data_offset": 0, 00:08:21.241 "data_size": 0 00:08:21.241 } 00:08:21.241 ] 00:08:21.241 }' 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.241 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.810 14:40:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.810 [2024-12-09 14:40:59.714048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.810 [2024-12-09 14:40:59.714144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.810 [2024-12-09 14:40:59.726013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.810 [2024-12-09 14:40:59.726108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.810 [2024-12-09 14:40:59.726136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.810 [2024-12-09 14:40:59.726158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.810 [2024-12-09 14:40:59.726176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.810 [2024-12-09 14:40:59.726197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:21.810 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.811 [2024-12-09 14:40:59.775169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.811 BaseBdev1 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.811 [ 00:08:21.811 { 00:08:21.811 "name": "BaseBdev1", 00:08:21.811 "aliases": [ 00:08:21.811 "73c06b8e-ea3c-484c-938b-a62786df1009" 00:08:21.811 ], 00:08:21.811 
"product_name": "Malloc disk", 00:08:21.811 "block_size": 512, 00:08:21.811 "num_blocks": 65536, 00:08:21.811 "uuid": "73c06b8e-ea3c-484c-938b-a62786df1009", 00:08:21.811 "assigned_rate_limits": { 00:08:21.811 "rw_ios_per_sec": 0, 00:08:21.811 "rw_mbytes_per_sec": 0, 00:08:21.811 "r_mbytes_per_sec": 0, 00:08:21.811 "w_mbytes_per_sec": 0 00:08:21.811 }, 00:08:21.811 "claimed": true, 00:08:21.811 "claim_type": "exclusive_write", 00:08:21.811 "zoned": false, 00:08:21.811 "supported_io_types": { 00:08:21.811 "read": true, 00:08:21.811 "write": true, 00:08:21.811 "unmap": true, 00:08:21.811 "flush": true, 00:08:21.811 "reset": true, 00:08:21.811 "nvme_admin": false, 00:08:21.811 "nvme_io": false, 00:08:21.811 "nvme_io_md": false, 00:08:21.811 "write_zeroes": true, 00:08:21.811 "zcopy": true, 00:08:21.811 "get_zone_info": false, 00:08:21.811 "zone_management": false, 00:08:21.811 "zone_append": false, 00:08:21.811 "compare": false, 00:08:21.811 "compare_and_write": false, 00:08:21.811 "abort": true, 00:08:21.811 "seek_hole": false, 00:08:21.811 "seek_data": false, 00:08:21.811 "copy": true, 00:08:21.811 "nvme_iov_md": false 00:08:21.811 }, 00:08:21.811 "memory_domains": [ 00:08:21.811 { 00:08:21.811 "dma_device_id": "system", 00:08:21.811 "dma_device_type": 1 00:08:21.811 }, 00:08:21.811 { 00:08:21.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.811 "dma_device_type": 2 00:08:21.811 } 00:08:21.811 ], 00:08:21.811 "driver_specific": {} 00:08:21.811 } 00:08:21.811 ] 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.811 14:40:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.811 "name": "Existed_Raid", 00:08:21.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.811 "strip_size_kb": 64, 00:08:21.811 "state": "configuring", 00:08:21.811 "raid_level": "raid0", 00:08:21.811 "superblock": false, 00:08:21.811 "num_base_bdevs": 3, 00:08:21.811 "num_base_bdevs_discovered": 1, 00:08:21.811 "num_base_bdevs_operational": 3, 00:08:21.811 "base_bdevs_list": [ 00:08:21.811 { 00:08:21.811 "name": "BaseBdev1", 
00:08:21.811 "uuid": "73c06b8e-ea3c-484c-938b-a62786df1009", 00:08:21.811 "is_configured": true, 00:08:21.811 "data_offset": 0, 00:08:21.811 "data_size": 65536 00:08:21.811 }, 00:08:21.811 { 00:08:21.811 "name": "BaseBdev2", 00:08:21.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.811 "is_configured": false, 00:08:21.811 "data_offset": 0, 00:08:21.811 "data_size": 0 00:08:21.811 }, 00:08:21.811 { 00:08:21.811 "name": "BaseBdev3", 00:08:21.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.811 "is_configured": false, 00:08:21.811 "data_offset": 0, 00:08:21.811 "data_size": 0 00:08:21.811 } 00:08:21.811 ] 00:08:21.811 }' 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.811 14:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.381 [2024-12-09 14:41:00.290375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.381 [2024-12-09 14:41:00.290436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.381 [2024-12-09 
14:41:00.302424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.381 [2024-12-09 14:41:00.304543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.381 [2024-12-09 14:41:00.304595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.381 [2024-12-09 14:41:00.304608] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.381 [2024-12-09 14:41:00.304619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.381 "name": "Existed_Raid", 00:08:22.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.381 "strip_size_kb": 64, 00:08:22.381 "state": "configuring", 00:08:22.381 "raid_level": "raid0", 00:08:22.381 "superblock": false, 00:08:22.381 "num_base_bdevs": 3, 00:08:22.381 "num_base_bdevs_discovered": 1, 00:08:22.381 "num_base_bdevs_operational": 3, 00:08:22.381 "base_bdevs_list": [ 00:08:22.381 { 00:08:22.381 "name": "BaseBdev1", 00:08:22.381 "uuid": "73c06b8e-ea3c-484c-938b-a62786df1009", 00:08:22.381 "is_configured": true, 00:08:22.381 "data_offset": 0, 00:08:22.381 "data_size": 65536 00:08:22.381 }, 00:08:22.381 { 00:08:22.381 "name": "BaseBdev2", 00:08:22.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.381 "is_configured": false, 00:08:22.381 "data_offset": 0, 00:08:22.381 "data_size": 0 00:08:22.381 }, 00:08:22.381 { 00:08:22.381 "name": "BaseBdev3", 00:08:22.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.381 "is_configured": false, 00:08:22.381 "data_offset": 0, 00:08:22.381 "data_size": 0 00:08:22.381 } 00:08:22.381 ] 00:08:22.381 }' 00:08:22.381 14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:22.381  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.641  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:22.641  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.641  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.901 [2024-12-09 14:41:00.782728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:22.901  BaseBdev2
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.901  [
00:08:22.901  {
00:08:22.901  "name": "BaseBdev2",
00:08:22.901  "aliases": [
00:08:22.901  "0c085292-97a2-4628-a117-687cb537ca51"
00:08:22.901  ],
00:08:22.901  "product_name": "Malloc disk",
00:08:22.901  "block_size": 512,
00:08:22.901  "num_blocks": 65536,
00:08:22.901  "uuid": "0c085292-97a2-4628-a117-687cb537ca51",
00:08:22.901  "assigned_rate_limits": {
00:08:22.901  "rw_ios_per_sec": 0,
00:08:22.901  "rw_mbytes_per_sec": 0,
00:08:22.901  "r_mbytes_per_sec": 0,
00:08:22.901  "w_mbytes_per_sec": 0
00:08:22.901  },
00:08:22.901  "claimed": true,
00:08:22.901  "claim_type": "exclusive_write",
00:08:22.901  "zoned": false,
00:08:22.901  "supported_io_types": {
00:08:22.901  "read": true,
00:08:22.901  "write": true,
00:08:22.901  "unmap": true,
00:08:22.901  "flush": true,
00:08:22.901  "reset": true,
00:08:22.901  "nvme_admin": false,
00:08:22.901  "nvme_io": false,
00:08:22.901  "nvme_io_md": false,
00:08:22.901  "write_zeroes": true,
00:08:22.901  "zcopy": true,
00:08:22.901  "get_zone_info": false,
00:08:22.901  "zone_management": false,
00:08:22.901  "zone_append": false,
00:08:22.901  "compare": false,
00:08:22.901  "compare_and_write": false,
00:08:22.901  "abort": true,
00:08:22.901  "seek_hole": false,
00:08:22.901  "seek_data": false,
00:08:22.901  "copy": true,
00:08:22.901  "nvme_iov_md": false
00:08:22.901  },
00:08:22.901  "memory_domains": [
00:08:22.901  {
00:08:22.901  "dma_device_id": "system",
00:08:22.901  "dma_device_type": 1
00:08:22.901  },
00:08:22.901  {
00:08:22.901  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:22.901  "dma_device_type": 2
00:08:22.901  }
00:08:22.901  ],
00:08:22.901  "driver_specific": {}
00:08:22.901  }
00:08:22.901  ]
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:22.901  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:22.902  "name": "Existed_Raid",
00:08:22.902  "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.902  "strip_size_kb": 64,
00:08:22.902  "state": "configuring",
00:08:22.902  "raid_level": "raid0",
00:08:22.902  "superblock": false,
00:08:22.902  "num_base_bdevs": 3,
00:08:22.902  "num_base_bdevs_discovered": 2,
00:08:22.902  "num_base_bdevs_operational": 3,
00:08:22.902  "base_bdevs_list": [
00:08:22.902  {
00:08:22.902  "name": "BaseBdev1",
00:08:22.902  "uuid": "73c06b8e-ea3c-484c-938b-a62786df1009",
00:08:22.902  "is_configured": true,
00:08:22.902  "data_offset": 0,
00:08:22.902  "data_size": 65536
00:08:22.902  },
00:08:22.902  {
00:08:22.902  "name": "BaseBdev2",
00:08:22.902  "uuid": "0c085292-97a2-4628-a117-687cb537ca51",
00:08:22.902  "is_configured": true,
00:08:22.902  "data_offset": 0,
00:08:22.902  "data_size": 65536
00:08:22.902  },
00:08:22.902  {
00:08:22.902  "name": "BaseBdev3",
00:08:22.902  "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.902  "is_configured": false,
00:08:22.902  "data_offset": 0,
00:08:22.902  "data_size": 0
00:08:22.902  }
00:08:22.902  ]
00:08:22.902  }'
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:22.902  14:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.161  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:23.161  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.161  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.421 [2024-12-09 14:41:01.347414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:23.421 [2024-12-09 14:41:01.347472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:23.421 [2024-12-09 14:41:01.347488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:23.421 [2024-12-09 14:41:01.347837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:23.421 [2024-12-09 14:41:01.348047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:23.421 [2024-12-09 14:41:01.348068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:23.421 [2024-12-09 14:41:01.348389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:23.421  BaseBdev3
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.421  [
00:08:23.421  {
00:08:23.421  "name": "BaseBdev3",
00:08:23.421  "aliases": [
00:08:23.421  "93c4e8cb-25ee-4c06-96eb-db43fa12d357"
00:08:23.421  ],
00:08:23.421  "product_name": "Malloc disk",
00:08:23.421  "block_size": 512,
00:08:23.421  "num_blocks": 65536,
00:08:23.421  "uuid": "93c4e8cb-25ee-4c06-96eb-db43fa12d357",
00:08:23.421  "assigned_rate_limits": {
00:08:23.421  "rw_ios_per_sec": 0,
00:08:23.421  "rw_mbytes_per_sec": 0,
00:08:23.421  "r_mbytes_per_sec": 0,
00:08:23.421  "w_mbytes_per_sec": 0
00:08:23.421  },
00:08:23.421  "claimed": true,
00:08:23.421  "claim_type": "exclusive_write",
00:08:23.421  "zoned": false,
00:08:23.421  "supported_io_types": {
00:08:23.421  "read": true,
00:08:23.421  "write": true,
00:08:23.421  "unmap": true,
00:08:23.421  "flush": true,
00:08:23.421  "reset": true,
00:08:23.421  "nvme_admin": false,
00:08:23.421  "nvme_io": false,
00:08:23.421  "nvme_io_md": false,
00:08:23.421  "write_zeroes": true,
00:08:23.421  "zcopy": true,
00:08:23.421  "get_zone_info": false,
00:08:23.421  "zone_management": false,
00:08:23.421  "zone_append": false,
00:08:23.421  "compare": false,
00:08:23.421  "compare_and_write": false,
00:08:23.421  "abort": true,
00:08:23.421  "seek_hole": false,
00:08:23.421  "seek_data": false,
00:08:23.421  "copy": true,
00:08:23.421  "nvme_iov_md": false
00:08:23.421  },
00:08:23.421  "memory_domains": [
00:08:23.421  {
00:08:23.421  "dma_device_id": "system",
00:08:23.421  "dma_device_type": 1
00:08:23.421  },
00:08:23.421  {
00:08:23.421  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.421  "dma_device_type": 2
00:08:23.421  }
00:08:23.421  ],
00:08:23.421  "driver_specific": {}
00:08:23.421  }
00:08:23.421  ]
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.421  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.421  "name": "Existed_Raid",
00:08:23.422  "uuid": "4f844114-126d-4f61-bb67-70a7a438c9eb",
00:08:23.422  "strip_size_kb": 64,
00:08:23.422  "state": "online",
00:08:23.422  "raid_level": "raid0",
00:08:23.422  "superblock": false,
00:08:23.422  "num_base_bdevs": 3,
00:08:23.422  "num_base_bdevs_discovered": 3,
00:08:23.422  "num_base_bdevs_operational": 3,
00:08:23.422  "base_bdevs_list": [
00:08:23.422  {
00:08:23.422  "name": "BaseBdev1",
00:08:23.422  "uuid": "73c06b8e-ea3c-484c-938b-a62786df1009",
00:08:23.422  "is_configured": true,
00:08:23.422  "data_offset": 0,
00:08:23.422  "data_size": 65536
00:08:23.422  },
00:08:23.422  {
00:08:23.422  "name": "BaseBdev2",
00:08:23.422  "uuid": "0c085292-97a2-4628-a117-687cb537ca51",
00:08:23.422  "is_configured": true,
00:08:23.422  "data_offset": 0,
00:08:23.422  "data_size": 65536
00:08:23.422  },
00:08:23.422  {
00:08:23.422  "name": "BaseBdev3",
00:08:23.422  "uuid": "93c4e8cb-25ee-4c06-96eb-db43fa12d357",
00:08:23.422  "is_configured": true,
00:08:23.422  "data_offset": 0,
00:08:23.422  "data_size": 65536
00:08:23.422  }
00:08:23.422  ]
00:08:23.422  }'
00:08:23.422  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.422  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.990  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.991 [2024-12-09 14:41:01.827076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:23.991  "name": "Existed_Raid",
00:08:23.991  "aliases": [
00:08:23.991  "4f844114-126d-4f61-bb67-70a7a438c9eb"
00:08:23.991  ],
00:08:23.991  "product_name": "Raid Volume",
00:08:23.991  "block_size": 512,
00:08:23.991  "num_blocks": 196608,
00:08:23.991  "uuid": "4f844114-126d-4f61-bb67-70a7a438c9eb",
00:08:23.991  "assigned_rate_limits": {
00:08:23.991  "rw_ios_per_sec": 0,
00:08:23.991  "rw_mbytes_per_sec": 0,
00:08:23.991  "r_mbytes_per_sec": 0,
00:08:23.991  "w_mbytes_per_sec": 0
00:08:23.991  },
00:08:23.991  "claimed": false,
00:08:23.991  "zoned": false,
00:08:23.991  "supported_io_types": {
00:08:23.991  "read": true,
00:08:23.991  "write": true,
00:08:23.991  "unmap": true,
00:08:23.991  "flush": true,
00:08:23.991  "reset": true,
00:08:23.991  "nvme_admin": false,
00:08:23.991  "nvme_io": false,
00:08:23.991  "nvme_io_md": false,
00:08:23.991  "write_zeroes": true,
00:08:23.991  "zcopy": false,
00:08:23.991  "get_zone_info": false,
00:08:23.991  "zone_management": false,
00:08:23.991  "zone_append": false,
00:08:23.991  "compare": false,
00:08:23.991  "compare_and_write": false,
00:08:23.991  "abort": false,
00:08:23.991  "seek_hole": false,
00:08:23.991  "seek_data": false,
00:08:23.991  "copy": false,
00:08:23.991  "nvme_iov_md": false
00:08:23.991  },
00:08:23.991  "memory_domains": [
00:08:23.991  {
00:08:23.991  "dma_device_id": "system",
00:08:23.991  "dma_device_type": 1
00:08:23.991  },
00:08:23.991  {
00:08:23.991  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.991  "dma_device_type": 2
00:08:23.991  },
00:08:23.991  {
00:08:23.991  "dma_device_id": "system",
00:08:23.991  "dma_device_type": 1
00:08:23.991  },
00:08:23.991  {
00:08:23.991  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.991  "dma_device_type": 2
00:08:23.991  },
00:08:23.991  {
00:08:23.991  "dma_device_id": "system",
00:08:23.991  "dma_device_type": 1
00:08:23.991  },
00:08:23.991  {
00:08:23.991  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.991  "dma_device_type": 2
00:08:23.991  }
00:08:23.991  ],
00:08:23.991  "driver_specific": {
00:08:23.991  "raid": {
00:08:23.991  "uuid": "4f844114-126d-4f61-bb67-70a7a438c9eb",
00:08:23.991  "strip_size_kb": 64,
00:08:23.991  "state": "online",
00:08:23.991  "raid_level": "raid0",
00:08:23.991  "superblock": false,
00:08:23.991  "num_base_bdevs": 3,
00:08:23.991  "num_base_bdevs_discovered": 3,
00:08:23.991  "num_base_bdevs_operational": 3,
00:08:23.991  "base_bdevs_list": [
00:08:23.991  {
00:08:23.991  "name": "BaseBdev1",
00:08:23.991  "uuid": "73c06b8e-ea3c-484c-938b-a62786df1009",
00:08:23.991  "is_configured": true,
00:08:23.991  "data_offset": 0,
00:08:23.991  "data_size": 65536
00:08:23.991  },
00:08:23.991  {
00:08:23.991  "name": "BaseBdev2",
00:08:23.991  "uuid": "0c085292-97a2-4628-a117-687cb537ca51",
00:08:23.991  "is_configured": true,
00:08:23.991  "data_offset": 0,
00:08:23.991  "data_size": 65536
00:08:23.991  },
00:08:23.991  {
00:08:23.991  "name": "BaseBdev3",
00:08:23.991  "uuid": "93c4e8cb-25ee-4c06-96eb-db43fa12d357",
00:08:23.991  "is_configured": true,
00:08:23.991  "data_offset": 0,
00:08:23.991  "data_size": 65536
00:08:23.991  }
00:08:23.991  ]
00:08:23.991  }
00:08:23.991  }
00:08:23.991  }'
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:23.991  BaseBdev2
00:08:23.991  BaseBdev3'
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:23.991  14:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.991  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.991 [2024-12-09 14:41:02.070379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:23.991 [2024-12-09 14:41:02.070413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:23.991 [2024-12-09 14:41:02.070473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.250  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:24.251  "name": "Existed_Raid",
00:08:24.251  "uuid": "4f844114-126d-4f61-bb67-70a7a438c9eb",
00:08:24.251  "strip_size_kb": 64,
00:08:24.251  "state": "offline",
00:08:24.251  "raid_level": "raid0",
00:08:24.251  "superblock": false,
00:08:24.251  "num_base_bdevs": 3,
00:08:24.251  "num_base_bdevs_discovered": 2,
00:08:24.251  "num_base_bdevs_operational": 2,
00:08:24.251  "base_bdevs_list": [
00:08:24.251  {
00:08:24.251  "name": null,
00:08:24.251  "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.251  "is_configured": false,
00:08:24.251  "data_offset": 0,
00:08:24.251  "data_size": 65536
00:08:24.251  },
00:08:24.251  {
00:08:24.251  "name": "BaseBdev2",
00:08:24.251  "uuid": "0c085292-97a2-4628-a117-687cb537ca51",
00:08:24.251  "is_configured": true,
00:08:24.251  "data_offset": 0,
00:08:24.251  "data_size": 65536
00:08:24.251  },
00:08:24.251  {
00:08:24.251  "name": "BaseBdev3",
00:08:24.251  "uuid": "93c4e8cb-25ee-4c06-96eb-db43fa12d357",
00:08:24.251  "is_configured": true,
00:08:24.251  "data_offset": 0,
00:08:24.251  "data_size": 65536
00:08:24.251  }
00:08:24.251  ]
00:08:24.251  }'
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:24.251  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.510  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:24.510  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:24.510  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.510  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:24.510  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.510  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.768  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.768  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:24.768  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:24.768  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:24.768  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.768  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.769 [2024-12-09 14:41:02.650443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.769  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.769 [2024-12-09 14:41:02.791643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:24.769 [2024-12-09 14:41:02.791695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.028  BaseBdev2
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.028  14:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.028  [
00:08:25.028  {
00:08:25.028  "name": "BaseBdev2",
00:08:25.028  "aliases": [
00:08:25.028  "23b14a23-2707-4f49-92d0-5a9ec78c7daa"
00:08:25.028  ],
00:08:25.028  "product_name": "Malloc disk",
00:08:25.028  "block_size": 512,
00:08:25.028  "num_blocks": 65536,
00:08:25.028  "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa",
00:08:25.028  "assigned_rate_limits": {
00:08:25.028  "rw_ios_per_sec": 0,
00:08:25.028  "rw_mbytes_per_sec": 0,
00:08:25.028  "r_mbytes_per_sec": 0,
00:08:25.028  "w_mbytes_per_sec": 0
00:08:25.028  },
00:08:25.028  "claimed": false,
00:08:25.028  "zoned": false,
00:08:25.028  "supported_io_types": {
00:08:25.028  "read": true,
00:08:25.028  "write": true,
00:08:25.028  "unmap": true,
00:08:25.028  "flush": true,
00:08:25.028  "reset": true,
00:08:25.028  "nvme_admin": false,
00:08:25.028  "nvme_io": false,
00:08:25.028  "nvme_io_md": false,
00:08:25.028  "write_zeroes": true,
00:08:25.028  "zcopy": true,
00:08:25.028  "get_zone_info": false,
00:08:25.028  "zone_management": false,
00:08:25.028  "zone_append": false,
00:08:25.028  "compare": false,
00:08:25.028  "compare_and_write": false,
00:08:25.028  "abort": true,
00:08:25.028  "seek_hole": false,
00:08:25.028  "seek_data": false,
00:08:25.028  "copy": true,
00:08:25.028  "nvme_iov_md": false
00:08:25.028  },
00:08:25.028  "memory_domains": [
00:08:25.028  {
00:08:25.028  "dma_device_id": "system",
00:08:25.028  "dma_device_type": 1
00:08:25.028  },
00:08:25.028  {
00:08:25.028  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:25.028  "dma_device_type": 2
00:08:25.028  }
00:08:25.028  ],
00:08:25.028  "driver_specific": {}
00:08:25.028  }
00:08:25.028  ]
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.028  BaseBdev3
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:25.028  14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.028  14:41:03
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.028 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.028 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.028 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 [ 00:08:25.028 { 00:08:25.028 "name": "BaseBdev3", 00:08:25.028 "aliases": [ 00:08:25.028 "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6" 00:08:25.028 ], 00:08:25.028 "product_name": "Malloc disk", 00:08:25.028 "block_size": 512, 00:08:25.028 "num_blocks": 65536, 00:08:25.028 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:25.028 "assigned_rate_limits": { 00:08:25.028 "rw_ios_per_sec": 0, 00:08:25.028 "rw_mbytes_per_sec": 0, 00:08:25.028 "r_mbytes_per_sec": 0, 00:08:25.028 "w_mbytes_per_sec": 0 00:08:25.028 }, 00:08:25.028 "claimed": false, 00:08:25.028 "zoned": false, 00:08:25.028 "supported_io_types": { 00:08:25.028 "read": true, 00:08:25.028 "write": true, 00:08:25.028 "unmap": true, 00:08:25.028 "flush": true, 00:08:25.029 "reset": true, 00:08:25.029 "nvme_admin": false, 00:08:25.029 "nvme_io": false, 00:08:25.029 "nvme_io_md": false, 00:08:25.029 "write_zeroes": true, 00:08:25.029 "zcopy": true, 00:08:25.029 "get_zone_info": false, 00:08:25.029 "zone_management": false, 00:08:25.029 "zone_append": false, 00:08:25.029 "compare": false, 00:08:25.029 "compare_and_write": false, 00:08:25.029 "abort": true, 00:08:25.029 "seek_hole": false, 00:08:25.029 "seek_data": false, 00:08:25.029 "copy": true, 00:08:25.029 "nvme_iov_md": false 00:08:25.029 }, 00:08:25.029 "memory_domains": [ 00:08:25.029 { 00:08:25.029 "dma_device_id": "system", 00:08:25.029 "dma_device_type": 1 00:08:25.029 }, 00:08:25.029 { 00:08:25.029 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:25.029 "dma_device_type": 2 00:08:25.029 } 00:08:25.029 ], 00:08:25.029 "driver_specific": {} 00:08:25.029 } 00:08:25.029 ] 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.029 [2024-12-09 14:41:03.093244] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.029 [2024-12-09 14:41:03.093291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.029 [2024-12-09 14:41:03.093314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.029 [2024-12-09 14:41:03.095399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.029 
14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.029 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.288 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.288 "name": "Existed_Raid", 00:08:25.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.288 "strip_size_kb": 64, 00:08:25.288 "state": "configuring", 00:08:25.288 "raid_level": "raid0", 00:08:25.288 "superblock": false, 00:08:25.288 "num_base_bdevs": 3, 00:08:25.288 "num_base_bdevs_discovered": 2, 00:08:25.288 "num_base_bdevs_operational": 3, 00:08:25.288 "base_bdevs_list": [ 00:08:25.288 { 00:08:25.288 "name": "BaseBdev1", 00:08:25.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.288 "is_configured": false, 00:08:25.288 
"data_offset": 0, 00:08:25.288 "data_size": 0 00:08:25.288 }, 00:08:25.288 { 00:08:25.288 "name": "BaseBdev2", 00:08:25.288 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:25.288 "is_configured": true, 00:08:25.288 "data_offset": 0, 00:08:25.288 "data_size": 65536 00:08:25.288 }, 00:08:25.288 { 00:08:25.288 "name": "BaseBdev3", 00:08:25.288 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:25.288 "is_configured": true, 00:08:25.288 "data_offset": 0, 00:08:25.288 "data_size": 65536 00:08:25.288 } 00:08:25.288 ] 00:08:25.288 }' 00:08:25.288 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.288 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.548 [2024-12-09 14:41:03.512567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.548 "name": "Existed_Raid", 00:08:25.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.548 "strip_size_kb": 64, 00:08:25.548 "state": "configuring", 00:08:25.548 "raid_level": "raid0", 00:08:25.548 "superblock": false, 00:08:25.548 "num_base_bdevs": 3, 00:08:25.548 "num_base_bdevs_discovered": 1, 00:08:25.548 "num_base_bdevs_operational": 3, 00:08:25.548 "base_bdevs_list": [ 00:08:25.548 { 00:08:25.548 "name": "BaseBdev1", 00:08:25.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.548 "is_configured": false, 00:08:25.548 "data_offset": 0, 00:08:25.548 "data_size": 0 00:08:25.548 }, 00:08:25.548 { 00:08:25.548 "name": null, 00:08:25.548 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:25.548 "is_configured": false, 00:08:25.548 "data_offset": 0, 00:08:25.548 "data_size": 65536 00:08:25.548 }, 00:08:25.548 { 
00:08:25.548 "name": "BaseBdev3", 00:08:25.548 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:25.548 "is_configured": true, 00:08:25.548 "data_offset": 0, 00:08:25.548 "data_size": 65536 00:08:25.548 } 00:08:25.548 ] 00:08:25.548 }' 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.548 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.119 14:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 [2024-12-09 14:41:04.021477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.119 BaseBdev1 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:26.119 14:41:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 [ 00:08:26.119 { 00:08:26.119 "name": "BaseBdev1", 00:08:26.119 "aliases": [ 00:08:26.119 "9e40fb55-8909-4a4a-86d3-c3ab4524941b" 00:08:26.119 ], 00:08:26.119 "product_name": "Malloc disk", 00:08:26.119 "block_size": 512, 00:08:26.119 "num_blocks": 65536, 00:08:26.119 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:26.119 "assigned_rate_limits": { 00:08:26.119 "rw_ios_per_sec": 0, 00:08:26.119 "rw_mbytes_per_sec": 0, 00:08:26.119 "r_mbytes_per_sec": 0, 00:08:26.119 "w_mbytes_per_sec": 0 00:08:26.119 }, 00:08:26.119 "claimed": true, 00:08:26.119 "claim_type": "exclusive_write", 00:08:26.119 "zoned": false, 00:08:26.119 "supported_io_types": { 00:08:26.119 "read": true, 00:08:26.119 "write": true, 00:08:26.119 "unmap": true, 00:08:26.119 "flush": true, 
00:08:26.119 "reset": true, 00:08:26.119 "nvme_admin": false, 00:08:26.119 "nvme_io": false, 00:08:26.119 "nvme_io_md": false, 00:08:26.119 "write_zeroes": true, 00:08:26.119 "zcopy": true, 00:08:26.119 "get_zone_info": false, 00:08:26.119 "zone_management": false, 00:08:26.119 "zone_append": false, 00:08:26.119 "compare": false, 00:08:26.119 "compare_and_write": false, 00:08:26.119 "abort": true, 00:08:26.119 "seek_hole": false, 00:08:26.119 "seek_data": false, 00:08:26.119 "copy": true, 00:08:26.119 "nvme_iov_md": false 00:08:26.119 }, 00:08:26.119 "memory_domains": [ 00:08:26.119 { 00:08:26.119 "dma_device_id": "system", 00:08:26.119 "dma_device_type": 1 00:08:26.119 }, 00:08:26.119 { 00:08:26.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.119 "dma_device_type": 2 00:08:26.119 } 00:08:26.119 ], 00:08:26.119 "driver_specific": {} 00:08:26.119 } 00:08:26.119 ] 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.119 "name": "Existed_Raid", 00:08:26.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.119 "strip_size_kb": 64, 00:08:26.119 "state": "configuring", 00:08:26.119 "raid_level": "raid0", 00:08:26.119 "superblock": false, 00:08:26.119 "num_base_bdevs": 3, 00:08:26.119 "num_base_bdevs_discovered": 2, 00:08:26.119 "num_base_bdevs_operational": 3, 00:08:26.119 "base_bdevs_list": [ 00:08:26.119 { 00:08:26.119 "name": "BaseBdev1", 00:08:26.119 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:26.119 "is_configured": true, 00:08:26.119 "data_offset": 0, 00:08:26.119 "data_size": 65536 00:08:26.119 }, 00:08:26.119 { 00:08:26.119 "name": null, 00:08:26.119 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:26.119 "is_configured": false, 00:08:26.119 "data_offset": 0, 00:08:26.119 "data_size": 65536 00:08:26.119 }, 00:08:26.119 { 00:08:26.119 "name": "BaseBdev3", 00:08:26.119 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:26.119 "is_configured": true, 00:08:26.119 "data_offset": 0, 00:08:26.119 "data_size": 65536 
00:08:26.119 } 00:08:26.119 ] 00:08:26.119 }' 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.119 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.689 [2024-12-09 14:41:04.568625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.689 
14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.689 "name": "Existed_Raid", 00:08:26.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.689 "strip_size_kb": 64, 00:08:26.689 "state": "configuring", 00:08:26.689 "raid_level": "raid0", 00:08:26.689 "superblock": false, 00:08:26.689 "num_base_bdevs": 3, 00:08:26.689 "num_base_bdevs_discovered": 1, 00:08:26.689 "num_base_bdevs_operational": 3, 00:08:26.689 "base_bdevs_list": [ 00:08:26.689 { 00:08:26.689 "name": "BaseBdev1", 00:08:26.689 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:26.689 "is_configured": true, 00:08:26.689 "data_offset": 0, 00:08:26.689 "data_size": 65536 00:08:26.689 }, 00:08:26.689 { 00:08:26.689 "name": null, 
00:08:26.689 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:26.689 "is_configured": false, 00:08:26.689 "data_offset": 0, 00:08:26.689 "data_size": 65536 00:08:26.689 }, 00:08:26.689 { 00:08:26.689 "name": null, 00:08:26.689 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:26.689 "is_configured": false, 00:08:26.689 "data_offset": 0, 00:08:26.689 "data_size": 65536 00:08:26.689 } 00:08:26.689 ] 00:08:26.689 }' 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.689 14:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.950 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.209 [2024-12-09 14:41:05.071789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.209 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.210 "name": "Existed_Raid", 00:08:27.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.210 "strip_size_kb": 64, 00:08:27.210 "state": "configuring", 00:08:27.210 "raid_level": "raid0", 00:08:27.210 "superblock": false, 00:08:27.210 
"num_base_bdevs": 3, 00:08:27.210 "num_base_bdevs_discovered": 2, 00:08:27.210 "num_base_bdevs_operational": 3, 00:08:27.210 "base_bdevs_list": [ 00:08:27.210 { 00:08:27.210 "name": "BaseBdev1", 00:08:27.210 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:27.210 "is_configured": true, 00:08:27.210 "data_offset": 0, 00:08:27.210 "data_size": 65536 00:08:27.210 }, 00:08:27.210 { 00:08:27.210 "name": null, 00:08:27.210 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:27.210 "is_configured": false, 00:08:27.210 "data_offset": 0, 00:08:27.210 "data_size": 65536 00:08:27.210 }, 00:08:27.210 { 00:08:27.210 "name": "BaseBdev3", 00:08:27.210 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:27.210 "is_configured": true, 00:08:27.210 "data_offset": 0, 00:08:27.210 "data_size": 65536 00:08:27.210 } 00:08:27.210 ] 00:08:27.210 }' 00:08:27.210 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.210 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.469 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.469 14:41:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.469 [2024-12-09 14:41:05.527050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.729 "name": "Existed_Raid", 00:08:27.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.729 "strip_size_kb": 64, 00:08:27.729 "state": "configuring", 00:08:27.729 "raid_level": "raid0", 00:08:27.729 "superblock": false, 00:08:27.729 "num_base_bdevs": 3, 00:08:27.729 "num_base_bdevs_discovered": 1, 00:08:27.729 "num_base_bdevs_operational": 3, 00:08:27.729 "base_bdevs_list": [ 00:08:27.729 { 00:08:27.729 "name": null, 00:08:27.729 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:27.729 "is_configured": false, 00:08:27.729 "data_offset": 0, 00:08:27.729 "data_size": 65536 00:08:27.729 }, 00:08:27.729 { 00:08:27.729 "name": null, 00:08:27.729 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:27.729 "is_configured": false, 00:08:27.729 "data_offset": 0, 00:08:27.729 "data_size": 65536 00:08:27.729 }, 00:08:27.729 { 00:08:27.729 "name": "BaseBdev3", 00:08:27.729 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:27.729 "is_configured": true, 00:08:27.729 "data_offset": 0, 00:08:27.729 "data_size": 65536 00:08:27.729 } 00:08:27.729 ] 00:08:27.729 }' 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.729 14:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.988 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.988 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:27.988 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.988 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.988 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.248 [2024-12-09 14:41:06.131727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.248 "name": "Existed_Raid", 00:08:28.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.248 "strip_size_kb": 64, 00:08:28.248 "state": "configuring", 00:08:28.248 "raid_level": "raid0", 00:08:28.248 "superblock": false, 00:08:28.248 "num_base_bdevs": 3, 00:08:28.248 "num_base_bdevs_discovered": 2, 00:08:28.248 "num_base_bdevs_operational": 3, 00:08:28.248 "base_bdevs_list": [ 00:08:28.248 { 00:08:28.248 "name": null, 00:08:28.248 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:28.248 "is_configured": false, 00:08:28.248 "data_offset": 0, 00:08:28.248 "data_size": 65536 00:08:28.248 }, 00:08:28.248 { 00:08:28.248 "name": "BaseBdev2", 00:08:28.248 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:28.248 "is_configured": true, 00:08:28.248 "data_offset": 0, 00:08:28.248 "data_size": 65536 00:08:28.248 }, 00:08:28.248 { 00:08:28.248 "name": "BaseBdev3", 00:08:28.248 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:28.248 "is_configured": true, 00:08:28.248 "data_offset": 0, 00:08:28.248 "data_size": 65536 00:08:28.248 } 00:08:28.248 ] 00:08:28.248 }' 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.248 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.507 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9e40fb55-8909-4a4a-86d3-c3ab4524941b 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.767 [2024-12-09 14:41:06.680950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:28.767 [2024-12-09 14:41:06.680994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:28.767 [2024-12-09 14:41:06.681004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:28.767 [2024-12-09 14:41:06.681275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:08:28.767 [2024-12-09 14:41:06.681442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:28.767 [2024-12-09 14:41:06.681457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:28.767 [2024-12-09 14:41:06.681742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.767 NewBaseBdev 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.767 [ 00:08:28.767 { 00:08:28.767 "name": "NewBaseBdev", 00:08:28.767 "aliases": [ 00:08:28.767 "9e40fb55-8909-4a4a-86d3-c3ab4524941b" 00:08:28.767 ], 00:08:28.767 "product_name": "Malloc disk", 00:08:28.767 "block_size": 512, 00:08:28.767 "num_blocks": 65536, 00:08:28.767 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:28.767 "assigned_rate_limits": { 00:08:28.767 "rw_ios_per_sec": 0, 00:08:28.767 "rw_mbytes_per_sec": 0, 00:08:28.767 "r_mbytes_per_sec": 0, 00:08:28.767 "w_mbytes_per_sec": 0 00:08:28.767 }, 00:08:28.767 "claimed": true, 00:08:28.767 "claim_type": "exclusive_write", 00:08:28.767 "zoned": false, 00:08:28.767 "supported_io_types": { 00:08:28.767 "read": true, 00:08:28.767 "write": true, 00:08:28.767 "unmap": true, 00:08:28.767 "flush": true, 00:08:28.767 "reset": true, 00:08:28.767 "nvme_admin": false, 00:08:28.767 "nvme_io": false, 00:08:28.767 "nvme_io_md": false, 00:08:28.767 "write_zeroes": true, 00:08:28.767 "zcopy": true, 00:08:28.767 "get_zone_info": false, 00:08:28.767 "zone_management": false, 00:08:28.767 "zone_append": false, 00:08:28.767 "compare": false, 00:08:28.767 "compare_and_write": false, 00:08:28.767 "abort": true, 00:08:28.767 "seek_hole": false, 00:08:28.767 "seek_data": false, 00:08:28.767 "copy": true, 00:08:28.767 "nvme_iov_md": false 00:08:28.767 }, 00:08:28.767 "memory_domains": [ 00:08:28.767 { 00:08:28.767 "dma_device_id": "system", 00:08:28.767 "dma_device_type": 1 00:08:28.767 }, 00:08:28.767 { 00:08:28.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.767 "dma_device_type": 2 00:08:28.767 } 00:08:28.767 ], 00:08:28.767 "driver_specific": {} 00:08:28.767 } 00:08:28.767 ] 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.767 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.768 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.768 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.768 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.768 "name": "Existed_Raid", 00:08:28.768 "uuid": "42a49076-4bc8-43a0-8a10-4d8786eb4f16", 00:08:28.768 "strip_size_kb": 64, 00:08:28.768 "state": "online", 00:08:28.768 "raid_level": "raid0", 00:08:28.768 "superblock": false, 00:08:28.768 
"num_base_bdevs": 3, 00:08:28.768 "num_base_bdevs_discovered": 3, 00:08:28.768 "num_base_bdevs_operational": 3, 00:08:28.768 "base_bdevs_list": [ 00:08:28.768 { 00:08:28.768 "name": "NewBaseBdev", 00:08:28.768 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:28.768 "is_configured": true, 00:08:28.768 "data_offset": 0, 00:08:28.768 "data_size": 65536 00:08:28.768 }, 00:08:28.768 { 00:08:28.768 "name": "BaseBdev2", 00:08:28.768 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:28.768 "is_configured": true, 00:08:28.768 "data_offset": 0, 00:08:28.768 "data_size": 65536 00:08:28.768 }, 00:08:28.768 { 00:08:28.768 "name": "BaseBdev3", 00:08:28.768 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:28.768 "is_configured": true, 00:08:28.768 "data_offset": 0, 00:08:28.768 "data_size": 65536 00:08:28.768 } 00:08:28.768 ] 00:08:28.768 }' 00:08:28.768 14:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.768 14:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.027 14:41:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.027 [2024-12-09 14:41:07.112598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.027 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.027 "name": "Existed_Raid", 00:08:29.027 "aliases": [ 00:08:29.027 "42a49076-4bc8-43a0-8a10-4d8786eb4f16" 00:08:29.027 ], 00:08:29.027 "product_name": "Raid Volume", 00:08:29.027 "block_size": 512, 00:08:29.027 "num_blocks": 196608, 00:08:29.027 "uuid": "42a49076-4bc8-43a0-8a10-4d8786eb4f16", 00:08:29.027 "assigned_rate_limits": { 00:08:29.027 "rw_ios_per_sec": 0, 00:08:29.027 "rw_mbytes_per_sec": 0, 00:08:29.027 "r_mbytes_per_sec": 0, 00:08:29.027 "w_mbytes_per_sec": 0 00:08:29.027 }, 00:08:29.027 "claimed": false, 00:08:29.027 "zoned": false, 00:08:29.027 "supported_io_types": { 00:08:29.027 "read": true, 00:08:29.027 "write": true, 00:08:29.027 "unmap": true, 00:08:29.027 "flush": true, 00:08:29.027 "reset": true, 00:08:29.027 "nvme_admin": false, 00:08:29.027 "nvme_io": false, 00:08:29.027 "nvme_io_md": false, 00:08:29.027 "write_zeroes": true, 00:08:29.027 "zcopy": false, 00:08:29.027 "get_zone_info": false, 00:08:29.027 "zone_management": false, 00:08:29.027 "zone_append": false, 00:08:29.027 "compare": false, 00:08:29.027 "compare_and_write": false, 00:08:29.027 "abort": false, 00:08:29.027 "seek_hole": false, 00:08:29.027 "seek_data": false, 00:08:29.027 "copy": false, 00:08:29.027 "nvme_iov_md": false 00:08:29.027 }, 00:08:29.027 "memory_domains": [ 00:08:29.027 { 00:08:29.027 "dma_device_id": "system", 00:08:29.027 "dma_device_type": 1 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.027 
"dma_device_type": 2 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "dma_device_id": "system", 00:08:29.027 "dma_device_type": 1 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.027 "dma_device_type": 2 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "dma_device_id": "system", 00:08:29.027 "dma_device_type": 1 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.027 "dma_device_type": 2 00:08:29.027 } 00:08:29.027 ], 00:08:29.027 "driver_specific": { 00:08:29.027 "raid": { 00:08:29.027 "uuid": "42a49076-4bc8-43a0-8a10-4d8786eb4f16", 00:08:29.027 "strip_size_kb": 64, 00:08:29.027 "state": "online", 00:08:29.027 "raid_level": "raid0", 00:08:29.027 "superblock": false, 00:08:29.027 "num_base_bdevs": 3, 00:08:29.027 "num_base_bdevs_discovered": 3, 00:08:29.027 "num_base_bdevs_operational": 3, 00:08:29.027 "base_bdevs_list": [ 00:08:29.027 { 00:08:29.027 "name": "NewBaseBdev", 00:08:29.027 "uuid": "9e40fb55-8909-4a4a-86d3-c3ab4524941b", 00:08:29.027 "is_configured": true, 00:08:29.027 "data_offset": 0, 00:08:29.027 "data_size": 65536 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "name": "BaseBdev2", 00:08:29.027 "uuid": "23b14a23-2707-4f49-92d0-5a9ec78c7daa", 00:08:29.027 "is_configured": true, 00:08:29.027 "data_offset": 0, 00:08:29.027 "data_size": 65536 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "name": "BaseBdev3", 00:08:29.027 "uuid": "00f89cdd-6377-4dd5-a79b-ad74a28a7bb6", 00:08:29.028 "is_configured": true, 00:08:29.028 "data_offset": 0, 00:08:29.028 "data_size": 65536 00:08:29.028 } 00:08:29.028 ] 00:08:29.028 } 00:08:29.028 } 00:08:29.028 }' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:29.287 BaseBdev2 00:08:29.287 BaseBdev3' 
00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.287 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.288 [2024-12-09 14:41:07.343905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.288 [2024-12-09 14:41:07.343941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.288 [2024-12-09 14:41:07.344037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.288 [2024-12-09 14:41:07.344097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.288 [2024-12-09 
14:41:07.344124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65102 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65102 ']' 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65102 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65102 00:08:29.288 killing process with pid 65102 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65102' 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65102 00:08:29.288 14:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65102 00:08:29.288 [2024-12-09 14:41:07.392479] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.856 [2024-12-09 14:41:07.717352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.794 14:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.794 00:08:30.794 real 0m10.538s 00:08:30.794 user 0m16.720s 00:08:30.794 sys 0m1.753s 00:08:30.794 14:41:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.794 ************************************ 00:08:30.794 END TEST raid_state_function_test 00:08:30.794 ************************************ 00:08:30.794 14:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.054 14:41:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:31.054 14:41:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.054 14:41:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.054 14:41:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.054 ************************************ 00:08:31.054 START TEST raid_state_function_test_sb 00:08:31.054 ************************************ 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.054 14:41:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=65729 00:08:31.054 Process raid pid: 65729 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65729' 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65729 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 65729 ']' 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.054 14:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.054 [2024-12-09 14:41:09.056876] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:08:31.054 [2024-12-09 14:41:09.056992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:31.325 [2024-12-09 14:41:09.215590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.325 [2024-12-09 14:41:09.339241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:31.599 [2024-12-09 14:41:09.561879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:31.599 [2024-12-09 14:41:09.561932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.858 [2024-12-09 14:41:09.910707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:31.858 [2024-12-09 14:41:09.910767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:31.858 [2024-12-09 14:41:09.910783] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:31.858 [2024-12-09 14:41:09.910795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:31.858 [2024-12-09 14:41:09.910802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:31.858 [2024-12-09 14:41:09.910813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.858 "name": "Existed_Raid",
00:08:31.858 "uuid": "2bbfb82b-b7f5-4576-9fea-07eef7fbdcef",
00:08:31.858 "strip_size_kb": 64,
00:08:31.858 "state": "configuring",
00:08:31.858 "raid_level": "raid0",
00:08:31.858 "superblock": true,
00:08:31.858 "num_base_bdevs": 3,
00:08:31.858 "num_base_bdevs_discovered": 0,
00:08:31.858 "num_base_bdevs_operational": 3,
00:08:31.858 "base_bdevs_list": [
00:08:31.858 {
00:08:31.858 "name": "BaseBdev1",
00:08:31.858 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.858 "is_configured": false,
00:08:31.858 "data_offset": 0,
00:08:31.858 "data_size": 0
00:08:31.858 },
00:08:31.858 {
00:08:31.858 "name": "BaseBdev2",
00:08:31.858 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.858 "is_configured": false,
00:08:31.858 "data_offset": 0,
00:08:31.858 "data_size": 0
00:08:31.858 },
00:08:31.858 {
00:08:31.858 "name": "BaseBdev3",
00:08:31.858 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.858 "is_configured": false,
00:08:31.858 "data_offset": 0,
00:08:31.858 "data_size": 0
00:08:31.858 }
00:08:31.858 ]
00:08:31.858 }'
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.858 14:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.426 [2024-12-09 14:41:10.329916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:32.426 [2024-12-09 14:41:10.329960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.426 [2024-12-09 14:41:10.341881] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:32.426 [2024-12-09 14:41:10.341923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:32.426 [2024-12-09 14:41:10.341932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:32.426 [2024-12-09 14:41:10.341941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:32.426 [2024-12-09 14:41:10.341948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:32.426 [2024-12-09 14:41:10.341956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.426 [2024-12-09 14:41:10.392689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:32.426 BaseBdev1
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.426 [
00:08:32.426 {
00:08:32.426 "name": "BaseBdev1",
00:08:32.426 "aliases": [
00:08:32.426 "7ce0bbfa-9812-4772-99a5-cfc37f86c06a"
00:08:32.426 ],
00:08:32.426 "product_name": "Malloc disk",
00:08:32.426 "block_size": 512,
00:08:32.426 "num_blocks": 65536,
00:08:32.426 "uuid": "7ce0bbfa-9812-4772-99a5-cfc37f86c06a",
00:08:32.426 "assigned_rate_limits": {
00:08:32.426 "rw_ios_per_sec": 0,
00:08:32.426 "rw_mbytes_per_sec": 0,
00:08:32.426 "r_mbytes_per_sec": 0,
00:08:32.426 "w_mbytes_per_sec": 0
00:08:32.426 },
00:08:32.426 "claimed": true,
00:08:32.426 "claim_type": "exclusive_write",
00:08:32.426 "zoned": false,
00:08:32.426 "supported_io_types": {
00:08:32.426 "read": true,
00:08:32.426 "write": true,
00:08:32.426 "unmap": true,
00:08:32.426 "flush": true,
00:08:32.426 "reset": true,
00:08:32.426 "nvme_admin": false,
00:08:32.426 "nvme_io": false,
00:08:32.426 "nvme_io_md": false,
00:08:32.426 "write_zeroes": true,
00:08:32.426 "zcopy": true,
00:08:32.426 "get_zone_info": false,
00:08:32.426 "zone_management": false,
00:08:32.426 "zone_append": false,
00:08:32.426 "compare": false,
00:08:32.426 "compare_and_write": false,
00:08:32.426 "abort": true,
00:08:32.426 "seek_hole": false,
00:08:32.426 "seek_data": false,
00:08:32.426 "copy": true,
00:08:32.426 "nvme_iov_md": false
00:08:32.426 },
00:08:32.426 "memory_domains": [
00:08:32.426 {
00:08:32.426 "dma_device_id": "system",
00:08:32.426 "dma_device_type": 1
00:08:32.426 },
00:08:32.426 {
00:08:32.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:32.426 "dma_device_type": 2
00:08:32.426 }
00:08:32.426 ],
00:08:32.426 "driver_specific": {}
00:08:32.426 }
00:08:32.426 ]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.426 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.426 "name": "Existed_Raid",
00:08:32.426 "uuid": "4c8f2044-5c5e-4fb1-85a2-db196856e271",
00:08:32.426 "strip_size_kb": 64,
00:08:32.427 "state": "configuring",
00:08:32.427 "raid_level": "raid0",
00:08:32.427 "superblock": true,
00:08:32.427 "num_base_bdevs": 3,
00:08:32.427 "num_base_bdevs_discovered": 1,
00:08:32.427 "num_base_bdevs_operational": 3,
00:08:32.427 "base_bdevs_list": [
00:08:32.427 {
00:08:32.427 "name": "BaseBdev1",
00:08:32.427 "uuid": "7ce0bbfa-9812-4772-99a5-cfc37f86c06a",
00:08:32.427 "is_configured": true,
00:08:32.427 "data_offset": 2048,
00:08:32.427 "data_size": 63488
00:08:32.427 },
00:08:32.427 {
00:08:32.427 "name": "BaseBdev2",
00:08:32.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.427 "is_configured": false,
00:08:32.427 "data_offset": 0,
00:08:32.427 "data_size": 0
00:08:32.427 },
00:08:32.427 {
00:08:32.427 "name": "BaseBdev3",
00:08:32.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.427 "is_configured": false,
00:08:32.427 "data_offset": 0,
00:08:32.427 "data_size": 0
00:08:32.427 }
00:08:32.427 ]
00:08:32.427 }'
00:08:32.427 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.427 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.994 [2024-12-09 14:41:10.839986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:32.994 [2024-12-09 14:41:10.840049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.994 [2024-12-09 14:41:10.851997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:32.994 [2024-12-09 14:41:10.853908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:32.994 [2024-12-09 14:41:10.853950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:32.994 [2024-12-09 14:41:10.853961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:32.994 [2024-12-09 14:41:10.853971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.994 "name": "Existed_Raid",
00:08:32.994 "uuid": "3fd0378f-327d-4dd3-b3a1-0948f0405c74",
00:08:32.994 "strip_size_kb": 64,
00:08:32.994 "state": "configuring",
00:08:32.994 "raid_level": "raid0",
00:08:32.994 "superblock": true,
00:08:32.994 "num_base_bdevs": 3,
00:08:32.994 "num_base_bdevs_discovered": 1,
00:08:32.994 "num_base_bdevs_operational": 3,
00:08:32.994 "base_bdevs_list": [
00:08:32.994 {
00:08:32.994 "name": "BaseBdev1",
00:08:32.994 "uuid": "7ce0bbfa-9812-4772-99a5-cfc37f86c06a",
00:08:32.994 "is_configured": true,
00:08:32.994 "data_offset": 2048,
00:08:32.994 "data_size": 63488
00:08:32.994 },
00:08:32.994 {
00:08:32.994 "name": "BaseBdev2",
00:08:32.994 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.994 "is_configured": false,
00:08:32.994 "data_offset": 0,
00:08:32.994 "data_size": 0
00:08:32.994 },
00:08:32.994 {
00:08:32.994 "name": "BaseBdev3",
00:08:32.994 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.994 "is_configured": false,
00:08:32.994 "data_offset": 0,
00:08:32.994 "data_size": 0
00:08:32.994 }
00:08:32.994 ]
00:08:32.994 }'
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.994 14:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.253 [2024-12-09 14:41:11.358534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:33.253 BaseBdev2
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.253 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.512 [
00:08:33.512 {
00:08:33.512 "name": "BaseBdev2",
00:08:33.512 "aliases": [
00:08:33.512 "bb4266e3-1f38-48f3-a5fc-a56bce164659"
00:08:33.512 ],
00:08:33.512 "product_name": "Malloc disk",
00:08:33.512 "block_size": 512,
00:08:33.512 "num_blocks": 65536,
00:08:33.512 "uuid": "bb4266e3-1f38-48f3-a5fc-a56bce164659",
00:08:33.512 "assigned_rate_limits": {
00:08:33.512 "rw_ios_per_sec": 0,
00:08:33.512 "rw_mbytes_per_sec": 0,
00:08:33.512 "r_mbytes_per_sec": 0,
00:08:33.512 "w_mbytes_per_sec": 0
00:08:33.512 },
00:08:33.512 "claimed": true,
00:08:33.512 "claim_type": "exclusive_write",
00:08:33.512 "zoned": false,
00:08:33.512 "supported_io_types": {
00:08:33.512 "read": true,
00:08:33.512 "write": true,
00:08:33.512 "unmap": true,
00:08:33.512 "flush": true,
00:08:33.512 "reset": true,
00:08:33.512 "nvme_admin": false,
00:08:33.512 "nvme_io": false,
00:08:33.512 "nvme_io_md": false,
00:08:33.512 "write_zeroes": true,
00:08:33.512 "zcopy": true,
00:08:33.512 "get_zone_info": false,
00:08:33.512 "zone_management": false,
00:08:33.513 "zone_append": false,
00:08:33.513 "compare": false,
00:08:33.513 "compare_and_write": false,
00:08:33.513 "abort": true,
00:08:33.513 "seek_hole": false,
00:08:33.513 "seek_data": false,
00:08:33.513 "copy": true,
00:08:33.513 "nvme_iov_md": false
00:08:33.513 },
00:08:33.513 "memory_domains": [
00:08:33.513 {
00:08:33.513 "dma_device_id": "system",
00:08:33.513 "dma_device_type": 1
00:08:33.513 },
00:08:33.513 {
00:08:33.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.513 "dma_device_type": 2
00:08:33.513 }
00:08:33.513 ],
00:08:33.513 "driver_specific": {}
00:08:33.513 }
00:08:33.513 ]
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:33.513 "name": "Existed_Raid",
00:08:33.513 "uuid": "3fd0378f-327d-4dd3-b3a1-0948f0405c74",
00:08:33.513 "strip_size_kb": 64,
00:08:33.513 "state": "configuring",
00:08:33.513 "raid_level": "raid0",
00:08:33.513 "superblock": true,
00:08:33.513 "num_base_bdevs": 3,
00:08:33.513 "num_base_bdevs_discovered": 2,
00:08:33.513 "num_base_bdevs_operational": 3,
00:08:33.513 "base_bdevs_list": [
00:08:33.513 {
00:08:33.513 "name": "BaseBdev1",
00:08:33.513 "uuid": "7ce0bbfa-9812-4772-99a5-cfc37f86c06a",
00:08:33.513 "is_configured": true,
00:08:33.513 "data_offset": 2048,
00:08:33.513 "data_size": 63488
00:08:33.513 },
00:08:33.513 {
00:08:33.513 "name": "BaseBdev2",
00:08:33.513 "uuid": "bb4266e3-1f38-48f3-a5fc-a56bce164659",
00:08:33.513 "is_configured": true,
00:08:33.513 "data_offset": 2048,
00:08:33.513 "data_size": 63488
00:08:33.513 },
00:08:33.513 {
00:08:33.513 "name": "BaseBdev3",
00:08:33.513 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:33.513 "is_configured": false,
00:08:33.513 "data_offset": 0,
00:08:33.513 "data_size": 0
00:08:33.513 }
00:08:33.513 ]
00:08:33.513 }'
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:33.513 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.772 [2024-12-09 14:41:11.851868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:33.772 [2024-12-09 14:41:11.852165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:33.772 [2024-12-09 14:41:11.852192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:33.772 [2024-12-09 14:41:11.852481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:33.772 [2024-12-09 14:41:11.852684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:33.772 [2024-12-09 14:41:11.852697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 BaseBdev3
00:08:33.772 [2024-12-09 14:41:11.852866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.772 [
00:08:33.772 {
00:08:33.772 "name": "BaseBdev3",
00:08:33.772 "aliases": [
00:08:33.772 "17912b47-5e8f-4950-8374-1d9a1a813cd2"
00:08:33.772 ],
00:08:33.772 "product_name": "Malloc disk",
00:08:33.772 "block_size": 512,
00:08:33.772 "num_blocks": 65536,
00:08:33.772 "uuid": "17912b47-5e8f-4950-8374-1d9a1a813cd2",
00:08:33.772 "assigned_rate_limits": {
00:08:33.772 "rw_ios_per_sec": 0,
00:08:33.772 "rw_mbytes_per_sec": 0,
00:08:33.772 "r_mbytes_per_sec": 0,
00:08:33.772 "w_mbytes_per_sec": 0
00:08:33.772 },
00:08:33.772 "claimed": true,
00:08:33.772 "claim_type": "exclusive_write",
00:08:33.772 "zoned": false,
00:08:33.772 "supported_io_types": {
00:08:33.772 "read": true,
00:08:33.772 "write": true,
00:08:33.772 "unmap": true,
00:08:33.772 "flush": true,
00:08:33.772 "reset": true,
00:08:33.772 "nvme_admin": false,
00:08:33.772 "nvme_io": false,
00:08:33.772 "nvme_io_md": false,
00:08:33.772 "write_zeroes": true,
00:08:33.772 "zcopy": true,
00:08:33.772 "get_zone_info": false,
00:08:33.772 "zone_management": false,
00:08:33.772 "zone_append": false,
00:08:33.772 "compare": false,
00:08:33.772 "compare_and_write": false,
00:08:33.772 "abort": true,
00:08:33.772 "seek_hole": false,
00:08:33.772 "seek_data": false,
00:08:33.772 "copy": true,
00:08:33.772 "nvme_iov_md": false
00:08:33.772 },
00:08:33.772 "memory_domains": [
00:08:33.772 {
00:08:33.772 "dma_device_id": "system",
00:08:33.772 "dma_device_type": 1
00:08:33.772 },
00:08:33.772 {
00:08:33.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.772 "dma_device_type": 2
00:08:33.772 }
00:08:33.772 ],
00:08:33.772 "driver_specific": {}
00:08:33.772 }
00:08:33.772 ]
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.772 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:34.031 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:34.031 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.031 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:34.031 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:34.031 "name": "Existed_Raid",
00:08:34.031 "uuid": "3fd0378f-327d-4dd3-b3a1-0948f0405c74",
00:08:34.031 "strip_size_kb": 64,
00:08:34.031 "state": "online",
00:08:34.031 "raid_level": "raid0",
00:08:34.031 "superblock": true,
00:08:34.031 "num_base_bdevs": 3,
00:08:34.031 "num_base_bdevs_discovered": 3,
00:08:34.031 "num_base_bdevs_operational": 3,
00:08:34.031 "base_bdevs_list": [
00:08:34.031 {
00:08:34.031 "name": "BaseBdev1",
00:08:34.031 "uuid": "7ce0bbfa-9812-4772-99a5-cfc37f86c06a",
00:08:34.031 "is_configured": true,
00:08:34.031 "data_offset": 2048,
00:08:34.031 "data_size": 63488
00:08:34.031 },
00:08:34.031 {
00:08:34.031 "name": "BaseBdev2",
00:08:34.031 "uuid": "bb4266e3-1f38-48f3-a5fc-a56bce164659",
00:08:34.031 "is_configured": true,
00:08:34.031 "data_offset": 2048,
00:08:34.031 "data_size": 63488
00:08:34.031 },
00:08:34.031 {
00:08:34.031 "name": "BaseBdev3",
00:08:34.031 "uuid": "17912b47-5e8f-4950-8374-1d9a1a813cd2",
00:08:34.031 "is_configured": true,
00:08:34.031 "data_offset": 2048,
00:08:34.031 "data_size": 63488
00:08:34.031 }
00:08:34.031 ]
00:08:34.031 }'
00:08:34.031 14:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:34.031 14:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:34.289 [2024-12-09 14:41:12.331468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:34.289 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:34.289 "name": "Existed_Raid",
00:08:34.289 "aliases": [
00:08:34.289 "3fd0378f-327d-4dd3-b3a1-0948f0405c74"
00:08:34.289 ],
00:08:34.289 "product_name": "Raid Volume",
00:08:34.289 "block_size": 512,
00:08:34.289 "num_blocks": 190464,
00:08:34.289 "uuid": "3fd0378f-327d-4dd3-b3a1-0948f0405c74",
00:08:34.289 "assigned_rate_limits": {
00:08:34.289 "rw_ios_per_sec": 0,
00:08:34.289 "rw_mbytes_per_sec": 0,
00:08:34.289 "r_mbytes_per_sec": 0,
00:08:34.289 "w_mbytes_per_sec": 0
00:08:34.289 },
00:08:34.290 "claimed": false,
00:08:34.290 "zoned": false,
00:08:34.290 "supported_io_types": {
00:08:34.290 "read": true,
00:08:34.290 "write": true,
00:08:34.290 "unmap": true,
00:08:34.290 "flush": true,
00:08:34.290 "reset": true,
00:08:34.290 "nvme_admin": false,
00:08:34.290 "nvme_io": false,
00:08:34.290 "nvme_io_md": false,
00:08:34.290 "write_zeroes": true,
00:08:34.290 "zcopy": false,
00:08:34.290 "get_zone_info": false,
00:08:34.290 "zone_management": false,
00:08:34.290 "zone_append": false,
00:08:34.290 "compare": false,
00:08:34.290 "compare_and_write": false,
00:08:34.290 "abort": false,
00:08:34.290 "seek_hole": false,
00:08:34.290 "seek_data": false,
00:08:34.290 "copy": false,
00:08:34.290 "nvme_iov_md": false
00:08:34.290 },
00:08:34.290 "memory_domains": [
00:08:34.290 {
00:08:34.290 "dma_device_id": "system",
00:08:34.290 "dma_device_type": 1
00:08:34.290 },
00:08:34.290 {
00:08:34.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:34.290 "dma_device_type": 2
00:08:34.290 },
00:08:34.290 {
00:08:34.290 "dma_device_id": "system",
00:08:34.290 "dma_device_type": 1
00:08:34.290 },
00:08:34.290 {
00:08:34.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:34.290 "dma_device_type": 2
00:08:34.290 },
00:08:34.290 {
00:08:34.290 "dma_device_id": "system",
00:08:34.290 "dma_device_type": 1
00:08:34.290 },
00:08:34.290 {
00:08:34.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:34.290 "dma_device_type": 2
00:08:34.290 }
00:08:34.290 ],
00:08:34.290 "driver_specific": {
00:08:34.290 "raid": {
00:08:34.290 "uuid": "3fd0378f-327d-4dd3-b3a1-0948f0405c74",
00:08:34.290 "strip_size_kb": 64,
00:08:34.290 "state": "online",
00:08:34.290 "raid_level": "raid0",
00:08:34.290 "superblock": true,
00:08:34.290 "num_base_bdevs": 3,
00:08:34.290 "num_base_bdevs_discovered": 3,
00:08:34.290 "num_base_bdevs_operational": 3,
00:08:34.290 "base_bdevs_list": [
00:08:34.290 {
00:08:34.290 "name": "BaseBdev1",
00:08:34.290 "uuid": "7ce0bbfa-9812-4772-99a5-cfc37f86c06a",
00:08:34.290 "is_configured": true,
00:08:34.290 "data_offset": 2048,
00:08:34.290 "data_size": 63488
00:08:34.290 },
00:08:34.290 {
00:08:34.290 "name": "BaseBdev2",
00:08:34.290 "uuid": "bb4266e3-1f38-48f3-a5fc-a56bce164659",
00:08:34.290 "is_configured": true,
00:08:34.290 "data_offset": 2048,
00:08:34.290 "data_size": 63488
00:08:34.290 },
00:08:34.290 { 00:08:34.290 "name": "BaseBdev3", 00:08:34.290 "uuid": "17912b47-5e8f-4950-8374-1d9a1a813cd2", 00:08:34.290 "is_configured": true, 00:08:34.290 "data_offset": 2048, 00:08:34.290 "data_size": 63488 00:08:34.290 } 00:08:34.290 ] 00:08:34.290 } 00:08:34.290 } 00:08:34.290 }' 00:08:34.290 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.290 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:34.290 BaseBdev2 00:08:34.290 BaseBdev3' 00:08:34.290 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.549 
14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.549 [2024-12-09 14:41:12.562799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.549 [2024-12-09 14:41:12.562839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.549 [2024-12-09 14:41:12.562899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.549 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.807 "name": "Existed_Raid", 00:08:34.807 "uuid": "3fd0378f-327d-4dd3-b3a1-0948f0405c74", 00:08:34.807 "strip_size_kb": 64, 00:08:34.807 "state": "offline", 00:08:34.807 "raid_level": "raid0", 00:08:34.807 "superblock": true, 00:08:34.807 "num_base_bdevs": 3, 00:08:34.807 "num_base_bdevs_discovered": 2, 00:08:34.807 "num_base_bdevs_operational": 2, 00:08:34.807 "base_bdevs_list": [ 00:08:34.807 { 00:08:34.807 "name": null, 00:08:34.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.807 "is_configured": false, 00:08:34.807 "data_offset": 0, 00:08:34.807 "data_size": 63488 00:08:34.807 }, 00:08:34.807 { 00:08:34.807 "name": "BaseBdev2", 00:08:34.807 "uuid": "bb4266e3-1f38-48f3-a5fc-a56bce164659", 00:08:34.807 "is_configured": true, 00:08:34.807 "data_offset": 2048, 00:08:34.807 "data_size": 63488 00:08:34.807 }, 00:08:34.807 { 00:08:34.807 "name": "BaseBdev3", 00:08:34.807 "uuid": "17912b47-5e8f-4950-8374-1d9a1a813cd2", 
00:08:34.807 "is_configured": true, 00:08:34.807 "data_offset": 2048, 00:08:34.807 "data_size": 63488 00:08:34.807 } 00:08:34.807 ] 00:08:34.807 }' 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.807 14:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.065 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.065 [2024-12-09 14:41:13.154684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.324 [2024-12-09 14:41:13.289190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.324 [2024-12-09 14:41:13.289246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.324 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.583 BaseBdev2 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.583 14:41:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.583 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.583 [ 00:08:35.583 { 00:08:35.583 "name": "BaseBdev2", 00:08:35.583 "aliases": [ 00:08:35.583 "1862061b-f1f3-4c78-940b-910702d3362c" 00:08:35.583 ], 00:08:35.584 "product_name": "Malloc disk", 00:08:35.584 "block_size": 512, 00:08:35.584 "num_blocks": 65536, 00:08:35.584 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:35.584 "assigned_rate_limits": { 00:08:35.584 "rw_ios_per_sec": 0, 00:08:35.584 "rw_mbytes_per_sec": 0, 00:08:35.584 "r_mbytes_per_sec": 0, 00:08:35.584 "w_mbytes_per_sec": 0 00:08:35.584 }, 00:08:35.584 "claimed": false, 00:08:35.584 "zoned": false, 00:08:35.584 "supported_io_types": { 00:08:35.584 "read": true, 00:08:35.584 "write": true, 00:08:35.584 "unmap": true, 00:08:35.584 "flush": true, 00:08:35.584 "reset": true, 00:08:35.584 "nvme_admin": false, 00:08:35.584 "nvme_io": false, 00:08:35.584 "nvme_io_md": false, 00:08:35.584 "write_zeroes": true, 00:08:35.584 "zcopy": true, 00:08:35.584 "get_zone_info": false, 00:08:35.584 
"zone_management": false, 00:08:35.584 "zone_append": false, 00:08:35.584 "compare": false, 00:08:35.584 "compare_and_write": false, 00:08:35.584 "abort": true, 00:08:35.584 "seek_hole": false, 00:08:35.584 "seek_data": false, 00:08:35.584 "copy": true, 00:08:35.584 "nvme_iov_md": false 00:08:35.584 }, 00:08:35.584 "memory_domains": [ 00:08:35.584 { 00:08:35.584 "dma_device_id": "system", 00:08:35.584 "dma_device_type": 1 00:08:35.584 }, 00:08:35.584 { 00:08:35.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.584 "dma_device_type": 2 00:08:35.584 } 00:08:35.584 ], 00:08:35.584 "driver_specific": {} 00:08:35.584 } 00:08:35.584 ] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.584 BaseBdev3 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.584 [ 00:08:35.584 { 00:08:35.584 "name": "BaseBdev3", 00:08:35.584 "aliases": [ 00:08:35.584 "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0" 00:08:35.584 ], 00:08:35.584 "product_name": "Malloc disk", 00:08:35.584 "block_size": 512, 00:08:35.584 "num_blocks": 65536, 00:08:35.584 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:35.584 "assigned_rate_limits": { 00:08:35.584 "rw_ios_per_sec": 0, 00:08:35.584 "rw_mbytes_per_sec": 0, 00:08:35.584 "r_mbytes_per_sec": 0, 00:08:35.584 "w_mbytes_per_sec": 0 00:08:35.584 }, 00:08:35.584 "claimed": false, 00:08:35.584 "zoned": false, 00:08:35.584 "supported_io_types": { 00:08:35.584 "read": true, 00:08:35.584 "write": true, 00:08:35.584 "unmap": true, 00:08:35.584 "flush": true, 00:08:35.584 "reset": true, 00:08:35.584 "nvme_admin": false, 00:08:35.584 "nvme_io": false, 00:08:35.584 "nvme_io_md": false, 00:08:35.584 "write_zeroes": true, 00:08:35.584 
"zcopy": true, 00:08:35.584 "get_zone_info": false, 00:08:35.584 "zone_management": false, 00:08:35.584 "zone_append": false, 00:08:35.584 "compare": false, 00:08:35.584 "compare_and_write": false, 00:08:35.584 "abort": true, 00:08:35.584 "seek_hole": false, 00:08:35.584 "seek_data": false, 00:08:35.584 "copy": true, 00:08:35.584 "nvme_iov_md": false 00:08:35.584 }, 00:08:35.584 "memory_domains": [ 00:08:35.584 { 00:08:35.584 "dma_device_id": "system", 00:08:35.584 "dma_device_type": 1 00:08:35.584 }, 00:08:35.584 { 00:08:35.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.584 "dma_device_type": 2 00:08:35.584 } 00:08:35.584 ], 00:08:35.584 "driver_specific": {} 00:08:35.584 } 00:08:35.584 ] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.584 [2024-12-09 14:41:13.599261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.584 [2024-12-09 14:41:13.599311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.584 [2024-12-09 14:41:13.599338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.584 [2024-12-09 14:41:13.601176] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.584 14:41:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.584 "name": "Existed_Raid", 00:08:35.584 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:35.584 "strip_size_kb": 64, 00:08:35.584 "state": "configuring", 00:08:35.584 "raid_level": "raid0", 00:08:35.584 "superblock": true, 00:08:35.584 "num_base_bdevs": 3, 00:08:35.584 "num_base_bdevs_discovered": 2, 00:08:35.584 "num_base_bdevs_operational": 3, 00:08:35.584 "base_bdevs_list": [ 00:08:35.584 { 00:08:35.584 "name": "BaseBdev1", 00:08:35.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.584 "is_configured": false, 00:08:35.584 "data_offset": 0, 00:08:35.584 "data_size": 0 00:08:35.584 }, 00:08:35.584 { 00:08:35.584 "name": "BaseBdev2", 00:08:35.584 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:35.584 "is_configured": true, 00:08:35.584 "data_offset": 2048, 00:08:35.584 "data_size": 63488 00:08:35.584 }, 00:08:35.584 { 00:08:35.584 "name": "BaseBdev3", 00:08:35.584 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:35.584 "is_configured": true, 00:08:35.584 "data_offset": 2048, 00:08:35.584 "data_size": 63488 00:08:35.584 } 00:08:35.584 ] 00:08:35.584 }' 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.584 14:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.152 [2024-12-09 14:41:14.026578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.152 14:41:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.152 "name": "Existed_Raid", 00:08:36.152 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:36.152 "strip_size_kb": 64, 
00:08:36.152 "state": "configuring", 00:08:36.152 "raid_level": "raid0", 00:08:36.152 "superblock": true, 00:08:36.152 "num_base_bdevs": 3, 00:08:36.152 "num_base_bdevs_discovered": 1, 00:08:36.152 "num_base_bdevs_operational": 3, 00:08:36.152 "base_bdevs_list": [ 00:08:36.152 { 00:08:36.152 "name": "BaseBdev1", 00:08:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.152 "is_configured": false, 00:08:36.152 "data_offset": 0, 00:08:36.152 "data_size": 0 00:08:36.152 }, 00:08:36.152 { 00:08:36.152 "name": null, 00:08:36.152 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:36.152 "is_configured": false, 00:08:36.152 "data_offset": 0, 00:08:36.152 "data_size": 63488 00:08:36.152 }, 00:08:36.152 { 00:08:36.152 "name": "BaseBdev3", 00:08:36.152 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:36.152 "is_configured": true, 00:08:36.152 "data_offset": 2048, 00:08:36.152 "data_size": 63488 00:08:36.152 } 00:08:36.152 ] 00:08:36.152 }' 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.152 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.409 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.409 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.409 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.409 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:36.409 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.668 [2024-12-09 14:41:14.580650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.668 BaseBdev1 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.668 
[ 00:08:36.668 { 00:08:36.668 "name": "BaseBdev1", 00:08:36.668 "aliases": [ 00:08:36.668 "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04" 00:08:36.668 ], 00:08:36.668 "product_name": "Malloc disk", 00:08:36.668 "block_size": 512, 00:08:36.668 "num_blocks": 65536, 00:08:36.668 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:36.668 "assigned_rate_limits": { 00:08:36.668 "rw_ios_per_sec": 0, 00:08:36.668 "rw_mbytes_per_sec": 0, 00:08:36.668 "r_mbytes_per_sec": 0, 00:08:36.668 "w_mbytes_per_sec": 0 00:08:36.668 }, 00:08:36.668 "claimed": true, 00:08:36.668 "claim_type": "exclusive_write", 00:08:36.668 "zoned": false, 00:08:36.668 "supported_io_types": { 00:08:36.668 "read": true, 00:08:36.668 "write": true, 00:08:36.668 "unmap": true, 00:08:36.668 "flush": true, 00:08:36.668 "reset": true, 00:08:36.668 "nvme_admin": false, 00:08:36.668 "nvme_io": false, 00:08:36.668 "nvme_io_md": false, 00:08:36.668 "write_zeroes": true, 00:08:36.668 "zcopy": true, 00:08:36.668 "get_zone_info": false, 00:08:36.668 "zone_management": false, 00:08:36.668 "zone_append": false, 00:08:36.668 "compare": false, 00:08:36.668 "compare_and_write": false, 00:08:36.668 "abort": true, 00:08:36.668 "seek_hole": false, 00:08:36.668 "seek_data": false, 00:08:36.668 "copy": true, 00:08:36.668 "nvme_iov_md": false 00:08:36.668 }, 00:08:36.668 "memory_domains": [ 00:08:36.668 { 00:08:36.668 "dma_device_id": "system", 00:08:36.668 "dma_device_type": 1 00:08:36.668 }, 00:08:36.668 { 00:08:36.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.668 "dma_device_type": 2 00:08:36.668 } 00:08:36.668 ], 00:08:36.668 "driver_specific": {} 00:08:36.668 } 00:08:36.668 ] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.668 "name": "Existed_Raid", 00:08:36.668 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:36.668 "strip_size_kb": 64, 00:08:36.668 "state": "configuring", 00:08:36.668 "raid_level": "raid0", 00:08:36.668 "superblock": true, 
00:08:36.668 "num_base_bdevs": 3, 00:08:36.668 "num_base_bdevs_discovered": 2, 00:08:36.668 "num_base_bdevs_operational": 3, 00:08:36.668 "base_bdevs_list": [ 00:08:36.668 { 00:08:36.668 "name": "BaseBdev1", 00:08:36.668 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:36.668 "is_configured": true, 00:08:36.668 "data_offset": 2048, 00:08:36.668 "data_size": 63488 00:08:36.668 }, 00:08:36.668 { 00:08:36.668 "name": null, 00:08:36.668 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:36.668 "is_configured": false, 00:08:36.668 "data_offset": 0, 00:08:36.668 "data_size": 63488 00:08:36.668 }, 00:08:36.668 { 00:08:36.668 "name": "BaseBdev3", 00:08:36.668 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:36.668 "is_configured": true, 00:08:36.668 "data_offset": 2048, 00:08:36.668 "data_size": 63488 00:08:36.668 } 00:08:36.668 ] 00:08:36.668 }' 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.668 14:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.234 [2024-12-09 14:41:15.163738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.234 "name": "Existed_Raid", 00:08:37.234 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:37.234 "strip_size_kb": 64, 00:08:37.234 "state": "configuring", 00:08:37.234 "raid_level": "raid0", 00:08:37.234 "superblock": true, 00:08:37.234 "num_base_bdevs": 3, 00:08:37.234 "num_base_bdevs_discovered": 1, 00:08:37.234 "num_base_bdevs_operational": 3, 00:08:37.234 "base_bdevs_list": [ 00:08:37.234 { 00:08:37.234 "name": "BaseBdev1", 00:08:37.234 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:37.234 "is_configured": true, 00:08:37.234 "data_offset": 2048, 00:08:37.234 "data_size": 63488 00:08:37.234 }, 00:08:37.234 { 00:08:37.234 "name": null, 00:08:37.234 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:37.234 "is_configured": false, 00:08:37.234 "data_offset": 0, 00:08:37.234 "data_size": 63488 00:08:37.234 }, 00:08:37.234 { 00:08:37.234 "name": null, 00:08:37.234 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:37.234 "is_configured": false, 00:08:37.234 "data_offset": 0, 00:08:37.234 "data_size": 63488 00:08:37.234 } 00:08:37.234 ] 00:08:37.234 }' 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.234 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.492 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.492 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.492 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.492 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.751 [2024-12-09 14:41:15.658935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.751 "name": "Existed_Raid", 00:08:37.751 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:37.751 "strip_size_kb": 64, 00:08:37.751 "state": "configuring", 00:08:37.751 "raid_level": "raid0", 00:08:37.751 "superblock": true, 00:08:37.751 "num_base_bdevs": 3, 00:08:37.751 "num_base_bdevs_discovered": 2, 00:08:37.751 "num_base_bdevs_operational": 3, 00:08:37.751 "base_bdevs_list": [ 00:08:37.751 { 00:08:37.751 "name": "BaseBdev1", 00:08:37.751 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:37.751 "is_configured": true, 00:08:37.751 "data_offset": 2048, 00:08:37.751 "data_size": 63488 00:08:37.751 }, 00:08:37.751 { 00:08:37.751 "name": null, 00:08:37.751 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:37.751 "is_configured": false, 00:08:37.751 "data_offset": 0, 00:08:37.751 "data_size": 63488 00:08:37.751 }, 00:08:37.751 { 00:08:37.751 "name": "BaseBdev3", 00:08:37.751 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:37.751 "is_configured": true, 00:08:37.751 "data_offset": 2048, 00:08:37.751 "data_size": 63488 00:08:37.751 } 00:08:37.751 ] 00:08:37.751 }' 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.751 14:41:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.010 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.010 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.010 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 [2024-12-09 14:41:16.182126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.356 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.357 "name": "Existed_Raid", 00:08:38.357 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:38.357 "strip_size_kb": 64, 00:08:38.357 "state": "configuring", 00:08:38.357 "raid_level": "raid0", 00:08:38.357 "superblock": true, 00:08:38.357 "num_base_bdevs": 3, 00:08:38.357 "num_base_bdevs_discovered": 1, 00:08:38.357 "num_base_bdevs_operational": 3, 00:08:38.357 "base_bdevs_list": [ 00:08:38.357 { 00:08:38.357 "name": null, 00:08:38.357 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:38.357 "is_configured": false, 00:08:38.357 "data_offset": 0, 00:08:38.357 "data_size": 63488 00:08:38.357 }, 00:08:38.357 { 00:08:38.357 "name": null, 00:08:38.357 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:38.357 "is_configured": false, 00:08:38.357 "data_offset": 0, 00:08:38.357 
"data_size": 63488 00:08:38.357 }, 00:08:38.357 { 00:08:38.357 "name": "BaseBdev3", 00:08:38.357 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:38.357 "is_configured": true, 00:08:38.357 "data_offset": 2048, 00:08:38.357 "data_size": 63488 00:08:38.357 } 00:08:38.357 ] 00:08:38.357 }' 00:08:38.357 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.357 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.924 [2024-12-09 14:41:16.789653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.924 14:41:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.924 "name": "Existed_Raid", 00:08:38.924 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:38.924 "strip_size_kb": 64, 00:08:38.924 "state": "configuring", 00:08:38.924 "raid_level": "raid0", 00:08:38.924 "superblock": true, 00:08:38.924 "num_base_bdevs": 3, 00:08:38.924 
"num_base_bdevs_discovered": 2, 00:08:38.924 "num_base_bdevs_operational": 3, 00:08:38.924 "base_bdevs_list": [ 00:08:38.924 { 00:08:38.924 "name": null, 00:08:38.924 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:38.924 "is_configured": false, 00:08:38.924 "data_offset": 0, 00:08:38.924 "data_size": 63488 00:08:38.924 }, 00:08:38.924 { 00:08:38.924 "name": "BaseBdev2", 00:08:38.924 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:38.924 "is_configured": true, 00:08:38.924 "data_offset": 2048, 00:08:38.924 "data_size": 63488 00:08:38.924 }, 00:08:38.924 { 00:08:38.924 "name": "BaseBdev3", 00:08:38.924 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:38.924 "is_configured": true, 00:08:38.924 "data_offset": 2048, 00:08:38.924 "data_size": 63488 00:08:38.924 } 00:08:38.924 ] 00:08:38.924 }' 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.924 14:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:39.183 14:41:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dd49f5b8-b08f-4674-9168-0aa1fbfd3e04 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.183 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 [2024-12-09 14:41:17.342553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:39.443 [2024-12-09 14:41:17.342797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:39.443 [2024-12-09 14:41:17.342814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.443 [2024-12-09 14:41:17.343054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:39.443 [2024-12-09 14:41:17.343201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:39.443 [2024-12-09 14:41:17.343211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:39.443 [2024-12-09 14:41:17.343350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.443 NewBaseBdev 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:39.443 
14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 [ 00:08:39.443 { 00:08:39.443 "name": "NewBaseBdev", 00:08:39.443 "aliases": [ 00:08:39.443 "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04" 00:08:39.443 ], 00:08:39.443 "product_name": "Malloc disk", 00:08:39.443 "block_size": 512, 00:08:39.443 "num_blocks": 65536, 00:08:39.443 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:39.443 "assigned_rate_limits": { 00:08:39.443 "rw_ios_per_sec": 0, 00:08:39.443 "rw_mbytes_per_sec": 0, 00:08:39.443 "r_mbytes_per_sec": 0, 00:08:39.443 "w_mbytes_per_sec": 0 00:08:39.443 }, 00:08:39.443 "claimed": true, 00:08:39.443 "claim_type": "exclusive_write", 00:08:39.443 "zoned": false, 00:08:39.443 "supported_io_types": { 00:08:39.443 "read": true, 00:08:39.443 "write": true, 00:08:39.443 
"unmap": true, 00:08:39.443 "flush": true, 00:08:39.443 "reset": true, 00:08:39.443 "nvme_admin": false, 00:08:39.443 "nvme_io": false, 00:08:39.443 "nvme_io_md": false, 00:08:39.443 "write_zeroes": true, 00:08:39.443 "zcopy": true, 00:08:39.443 "get_zone_info": false, 00:08:39.443 "zone_management": false, 00:08:39.443 "zone_append": false, 00:08:39.443 "compare": false, 00:08:39.443 "compare_and_write": false, 00:08:39.443 "abort": true, 00:08:39.443 "seek_hole": false, 00:08:39.443 "seek_data": false, 00:08:39.443 "copy": true, 00:08:39.443 "nvme_iov_md": false 00:08:39.443 }, 00:08:39.443 "memory_domains": [ 00:08:39.443 { 00:08:39.443 "dma_device_id": "system", 00:08:39.443 "dma_device_type": 1 00:08:39.443 }, 00:08:39.443 { 00:08:39.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.443 "dma_device_type": 2 00:08:39.443 } 00:08:39.443 ], 00:08:39.443 "driver_specific": {} 00:08:39.443 } 00:08:39.443 ] 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.443 "name": "Existed_Raid", 00:08:39.443 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:39.443 "strip_size_kb": 64, 00:08:39.443 "state": "online", 00:08:39.443 "raid_level": "raid0", 00:08:39.443 "superblock": true, 00:08:39.443 "num_base_bdevs": 3, 00:08:39.443 "num_base_bdevs_discovered": 3, 00:08:39.443 "num_base_bdevs_operational": 3, 00:08:39.443 "base_bdevs_list": [ 00:08:39.443 { 00:08:39.443 "name": "NewBaseBdev", 00:08:39.443 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:39.443 "is_configured": true, 00:08:39.443 "data_offset": 2048, 00:08:39.443 "data_size": 63488 00:08:39.443 }, 00:08:39.443 { 00:08:39.443 "name": "BaseBdev2", 00:08:39.443 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:39.443 "is_configured": true, 00:08:39.443 "data_offset": 2048, 00:08:39.443 "data_size": 63488 00:08:39.443 }, 00:08:39.443 { 00:08:39.443 "name": "BaseBdev3", 00:08:39.443 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:39.443 
"is_configured": true, 00:08:39.443 "data_offset": 2048, 00:08:39.443 "data_size": 63488 00:08:39.443 } 00:08:39.443 ] 00:08:39.443 }' 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.443 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.702 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.702 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.702 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.702 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.702 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.961 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.961 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.961 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.961 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.961 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.961 [2024-12-09 14:41:17.834136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.961 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.961 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.961 "name": "Existed_Raid", 00:08:39.961 "aliases": [ 00:08:39.961 "8d0b965d-830e-44c0-9a87-8bc453e56a07" 00:08:39.961 ], 00:08:39.961 "product_name": "Raid 
Volume", 00:08:39.961 "block_size": 512, 00:08:39.961 "num_blocks": 190464, 00:08:39.961 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:39.961 "assigned_rate_limits": { 00:08:39.961 "rw_ios_per_sec": 0, 00:08:39.961 "rw_mbytes_per_sec": 0, 00:08:39.961 "r_mbytes_per_sec": 0, 00:08:39.961 "w_mbytes_per_sec": 0 00:08:39.961 }, 00:08:39.961 "claimed": false, 00:08:39.961 "zoned": false, 00:08:39.961 "supported_io_types": { 00:08:39.961 "read": true, 00:08:39.961 "write": true, 00:08:39.961 "unmap": true, 00:08:39.961 "flush": true, 00:08:39.961 "reset": true, 00:08:39.961 "nvme_admin": false, 00:08:39.961 "nvme_io": false, 00:08:39.961 "nvme_io_md": false, 00:08:39.961 "write_zeroes": true, 00:08:39.961 "zcopy": false, 00:08:39.961 "get_zone_info": false, 00:08:39.961 "zone_management": false, 00:08:39.961 "zone_append": false, 00:08:39.961 "compare": false, 00:08:39.961 "compare_and_write": false, 00:08:39.961 "abort": false, 00:08:39.961 "seek_hole": false, 00:08:39.961 "seek_data": false, 00:08:39.961 "copy": false, 00:08:39.961 "nvme_iov_md": false 00:08:39.961 }, 00:08:39.961 "memory_domains": [ 00:08:39.961 { 00:08:39.961 "dma_device_id": "system", 00:08:39.961 "dma_device_type": 1 00:08:39.961 }, 00:08:39.961 { 00:08:39.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.961 "dma_device_type": 2 00:08:39.961 }, 00:08:39.961 { 00:08:39.961 "dma_device_id": "system", 00:08:39.961 "dma_device_type": 1 00:08:39.961 }, 00:08:39.961 { 00:08:39.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.961 "dma_device_type": 2 00:08:39.961 }, 00:08:39.961 { 00:08:39.961 "dma_device_id": "system", 00:08:39.961 "dma_device_type": 1 00:08:39.961 }, 00:08:39.961 { 00:08:39.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.961 "dma_device_type": 2 00:08:39.961 } 00:08:39.961 ], 00:08:39.961 "driver_specific": { 00:08:39.961 "raid": { 00:08:39.961 "uuid": "8d0b965d-830e-44c0-9a87-8bc453e56a07", 00:08:39.961 "strip_size_kb": 64, 00:08:39.961 "state": "online", 
00:08:39.961 "raid_level": "raid0", 00:08:39.961 "superblock": true, 00:08:39.961 "num_base_bdevs": 3, 00:08:39.961 "num_base_bdevs_discovered": 3, 00:08:39.961 "num_base_bdevs_operational": 3, 00:08:39.961 "base_bdevs_list": [ 00:08:39.961 { 00:08:39.961 "name": "NewBaseBdev", 00:08:39.961 "uuid": "dd49f5b8-b08f-4674-9168-0aa1fbfd3e04", 00:08:39.961 "is_configured": true, 00:08:39.961 "data_offset": 2048, 00:08:39.961 "data_size": 63488 00:08:39.961 }, 00:08:39.961 { 00:08:39.961 "name": "BaseBdev2", 00:08:39.962 "uuid": "1862061b-f1f3-4c78-940b-910702d3362c", 00:08:39.962 "is_configured": true, 00:08:39.962 "data_offset": 2048, 00:08:39.962 "data_size": 63488 00:08:39.962 }, 00:08:39.962 { 00:08:39.962 "name": "BaseBdev3", 00:08:39.962 "uuid": "44ba539b-23b6-4676-a6b2-b42cf0d3c0c0", 00:08:39.962 "is_configured": true, 00:08:39.962 "data_offset": 2048, 00:08:39.962 "data_size": 63488 00:08:39.962 } 00:08:39.962 ] 00:08:39.962 } 00:08:39.962 } 00:08:39.962 }' 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:39.962 BaseBdev2 00:08:39.962 BaseBdev3' 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.962 14:41:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.962 14:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.962 14:41:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.962 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.221 [2024-12-09 14:41:18.113332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.221 [2024-12-09 14:41:18.113366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.221 [2024-12-09 14:41:18.113441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.221 [2024-12-09 14:41:18.113496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.221 [2024-12-09 14:41:18.113508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65729 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 65729 ']' 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
65729 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65729 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.221 killing process with pid 65729 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65729' 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 65729 00:08:40.221 [2024-12-09 14:41:18.160801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.221 14:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 65729 00:08:40.480 [2024-12-09 14:41:18.462504] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.857 14:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.857 00:08:41.857 real 0m10.630s 00:08:41.857 user 0m16.930s 00:08:41.857 sys 0m1.841s 00:08:41.857 14:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.857 14:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.857 ************************************ 00:08:41.857 END TEST raid_state_function_test_sb 00:08:41.857 ************************************ 00:08:41.857 14:41:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:41.857 14:41:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.857 
14:41:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.857 14:41:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.857 ************************************ 00:08:41.857 START TEST raid_superblock_test 00:08:41.857 ************************************ 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66349 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66349 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66349 ']' 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.857 14:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.857 [2024-12-09 14:41:19.743695] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:08:41.857 [2024-12-09 14:41:19.743832] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66349 ] 00:08:41.857 [2024-12-09 14:41:19.918789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.116 [2024-12-09 14:41:20.035458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.373 [2024-12-09 14:41:20.236465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.374 [2024-12-09 14:41:20.236523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:42.631 
14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.631 malloc1 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.631 [2024-12-09 14:41:20.670766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.631 [2024-12-09 14:41:20.670863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.631 [2024-12-09 14:41:20.670901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:42.631 [2024-12-09 14:41:20.670930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.631 [2024-12-09 14:41:20.672973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.631 [2024-12-09 14:41:20.673042] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.631 pt1 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.631 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.631 malloc2 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.632 [2024-12-09 14:41:20.728372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.632 [2024-12-09 14:41:20.728470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.632 [2024-12-09 14:41:20.728501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:42.632 [2024-12-09 14:41:20.728510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.632 [2024-12-09 14:41:20.730626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.632 [2024-12-09 14:41:20.730661] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.632 
pt2 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.632 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.891 malloc3 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.891 [2024-12-09 14:41:20.796875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.891 [2024-12-09 14:41:20.796992] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.891 [2024-12-09 14:41:20.797034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:42.891 [2024-12-09 14:41:20.797063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.891 [2024-12-09 14:41:20.799343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.891 [2024-12-09 14:41:20.799413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.891 pt3 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.891 [2024-12-09 14:41:20.808881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.891 [2024-12-09 14:41:20.810716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.891 [2024-12-09 14:41:20.810840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.891 [2024-12-09 14:41:20.811053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:42.891 [2024-12-09 14:41:20.811109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.891 [2024-12-09 14:41:20.811412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:42.891 [2024-12-09 14:41:20.811632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:42.891 [2024-12-09 14:41:20.811673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:42.891 [2024-12-09 14:41:20.811907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.891 "name": "raid_bdev1", 00:08:42.891 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:42.891 "strip_size_kb": 64, 00:08:42.891 "state": "online", 00:08:42.891 "raid_level": "raid0", 00:08:42.891 "superblock": true, 00:08:42.891 "num_base_bdevs": 3, 00:08:42.891 "num_base_bdevs_discovered": 3, 00:08:42.891 "num_base_bdevs_operational": 3, 00:08:42.891 "base_bdevs_list": [ 00:08:42.891 { 00:08:42.891 "name": "pt1", 00:08:42.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.891 "is_configured": true, 00:08:42.891 "data_offset": 2048, 00:08:42.891 "data_size": 63488 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "name": "pt2", 00:08:42.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.891 "is_configured": true, 00:08:42.891 "data_offset": 2048, 00:08:42.891 "data_size": 63488 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "name": "pt3", 00:08:42.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.891 "is_configured": true, 00:08:42.891 "data_offset": 2048, 00:08:42.891 "data_size": 63488 00:08:42.891 } 00:08:42.891 ] 00:08:42.891 }' 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.891 14:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.460 [2024-12-09 14:41:21.304341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.460 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.460 "name": "raid_bdev1", 00:08:43.461 "aliases": [ 00:08:43.461 "87b73315-45db-4ecb-ba50-5dfa6b8d7a95" 00:08:43.461 ], 00:08:43.461 "product_name": "Raid Volume", 00:08:43.461 "block_size": 512, 00:08:43.461 "num_blocks": 190464, 00:08:43.461 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:43.461 "assigned_rate_limits": { 00:08:43.461 "rw_ios_per_sec": 0, 00:08:43.461 "rw_mbytes_per_sec": 0, 00:08:43.461 "r_mbytes_per_sec": 0, 00:08:43.461 "w_mbytes_per_sec": 0 00:08:43.461 }, 00:08:43.461 "claimed": false, 00:08:43.461 "zoned": false, 00:08:43.461 "supported_io_types": { 00:08:43.461 "read": true, 00:08:43.461 "write": true, 00:08:43.461 "unmap": true, 00:08:43.461 "flush": true, 00:08:43.461 "reset": true, 00:08:43.461 "nvme_admin": false, 00:08:43.461 "nvme_io": false, 00:08:43.461 "nvme_io_md": false, 00:08:43.461 "write_zeroes": true, 00:08:43.461 "zcopy": false, 00:08:43.461 "get_zone_info": false, 00:08:43.461 "zone_management": false, 00:08:43.461 "zone_append": false, 00:08:43.461 "compare": 
false, 00:08:43.461 "compare_and_write": false, 00:08:43.461 "abort": false, 00:08:43.461 "seek_hole": false, 00:08:43.461 "seek_data": false, 00:08:43.461 "copy": false, 00:08:43.461 "nvme_iov_md": false 00:08:43.461 }, 00:08:43.461 "memory_domains": [ 00:08:43.461 { 00:08:43.461 "dma_device_id": "system", 00:08:43.461 "dma_device_type": 1 00:08:43.461 }, 00:08:43.461 { 00:08:43.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.461 "dma_device_type": 2 00:08:43.461 }, 00:08:43.461 { 00:08:43.461 "dma_device_id": "system", 00:08:43.461 "dma_device_type": 1 00:08:43.461 }, 00:08:43.461 { 00:08:43.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.461 "dma_device_type": 2 00:08:43.461 }, 00:08:43.461 { 00:08:43.461 "dma_device_id": "system", 00:08:43.461 "dma_device_type": 1 00:08:43.461 }, 00:08:43.461 { 00:08:43.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.461 "dma_device_type": 2 00:08:43.461 } 00:08:43.461 ], 00:08:43.461 "driver_specific": { 00:08:43.461 "raid": { 00:08:43.461 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:43.461 "strip_size_kb": 64, 00:08:43.461 "state": "online", 00:08:43.461 "raid_level": "raid0", 00:08:43.461 "superblock": true, 00:08:43.461 "num_base_bdevs": 3, 00:08:43.461 "num_base_bdevs_discovered": 3, 00:08:43.461 "num_base_bdevs_operational": 3, 00:08:43.461 "base_bdevs_list": [ 00:08:43.461 { 00:08:43.461 "name": "pt1", 00:08:43.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.461 "is_configured": true, 00:08:43.461 "data_offset": 2048, 00:08:43.461 "data_size": 63488 00:08:43.461 }, 00:08:43.461 { 00:08:43.461 "name": "pt2", 00:08:43.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.461 "is_configured": true, 00:08:43.461 "data_offset": 2048, 00:08:43.461 "data_size": 63488 00:08:43.461 }, 00:08:43.461 { 00:08:43.461 "name": "pt3", 00:08:43.461 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.461 "is_configured": true, 00:08:43.461 "data_offset": 2048, 00:08:43.461 "data_size": 
63488 00:08:43.461 } 00:08:43.461 ] 00:08:43.461 } 00:08:43.461 } 00:08:43.461 }' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.461 pt2 00:08:43.461 pt3' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.461 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.722 [2024-12-09 14:41:21.587976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=87b73315-45db-4ecb-ba50-5dfa6b8d7a95 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 87b73315-45db-4ecb-ba50-5dfa6b8d7a95 ']' 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.722 [2024-12-09 14:41:21.635542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.722 [2024-12-09 14:41:21.635573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.722 [2024-12-09 14:41:21.635672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.722 [2024-12-09 14:41:21.635735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.722 [2024-12-09 14:41:21.635745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:43.722 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.723 [2024-12-09 14:41:21.759343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:43.723 [2024-12-09 14:41:21.761225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:43.723 [2024-12-09 14:41:21.761343] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:43.723 [2024-12-09 14:41:21.761400] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:43.723 [2024-12-09 14:41:21.761448] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:43.723 [2024-12-09 14:41:21.761468] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:43.723 [2024-12-09 14:41:21.761485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.723 [2024-12-09 14:41:21.761496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:43.723 request: 00:08:43.723 { 00:08:43.723 "name": "raid_bdev1", 00:08:43.723 "raid_level": "raid0", 00:08:43.723 "base_bdevs": [ 00:08:43.723 "malloc1", 00:08:43.723 "malloc2", 00:08:43.723 "malloc3" 00:08:43.723 ], 00:08:43.723 "strip_size_kb": 64, 00:08:43.723 "superblock": false, 00:08:43.723 "method": "bdev_raid_create", 00:08:43.723 "req_id": 1 00:08:43.723 } 00:08:43.723 Got JSON-RPC error response 00:08:43.723 response: 00:08:43.723 { 00:08:43.723 "code": -17, 00:08:43.723 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:43.723 } 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.723 [2024-12-09 14:41:21.827177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.723 [2024-12-09 14:41:21.827264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.723 [2024-12-09 14:41:21.827299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:43.723 [2024-12-09 14:41:21.827326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.723 [2024-12-09 14:41:21.829457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.723 [2024-12-09 14:41:21.829554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.723 [2024-12-09 14:41:21.829687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:43.723 [2024-12-09 14:41:21.829792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:43.723 pt1 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.723 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.985 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.985 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.985 "name": "raid_bdev1", 00:08:43.985 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:43.985 
"strip_size_kb": 64, 00:08:43.985 "state": "configuring", 00:08:43.985 "raid_level": "raid0", 00:08:43.985 "superblock": true, 00:08:43.985 "num_base_bdevs": 3, 00:08:43.985 "num_base_bdevs_discovered": 1, 00:08:43.985 "num_base_bdevs_operational": 3, 00:08:43.985 "base_bdevs_list": [ 00:08:43.985 { 00:08:43.985 "name": "pt1", 00:08:43.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.985 "is_configured": true, 00:08:43.985 "data_offset": 2048, 00:08:43.985 "data_size": 63488 00:08:43.985 }, 00:08:43.985 { 00:08:43.985 "name": null, 00:08:43.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.985 "is_configured": false, 00:08:43.985 "data_offset": 2048, 00:08:43.985 "data_size": 63488 00:08:43.985 }, 00:08:43.985 { 00:08:43.985 "name": null, 00:08:43.985 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.985 "is_configured": false, 00:08:43.985 "data_offset": 2048, 00:08:43.985 "data_size": 63488 00:08:43.985 } 00:08:43.985 ] 00:08:43.985 }' 00:08:43.985 14:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.985 14:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 [2024-12-09 14:41:22.262528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.248 [2024-12-09 14:41:22.262613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.248 [2024-12-09 14:41:22.262642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:44.248 [2024-12-09 14:41:22.262652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.248 [2024-12-09 14:41:22.263145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.248 [2024-12-09 14:41:22.263170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.248 [2024-12-09 14:41:22.263259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.248 [2024-12-09 14:41:22.263290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.248 pt2 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 [2024-12-09 14:41:22.274552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.248 14:41:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.248 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.248 "name": "raid_bdev1", 00:08:44.248 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:44.248 "strip_size_kb": 64, 00:08:44.248 "state": "configuring", 00:08:44.248 "raid_level": "raid0", 00:08:44.248 "superblock": true, 00:08:44.248 "num_base_bdevs": 3, 00:08:44.248 "num_base_bdevs_discovered": 1, 00:08:44.248 "num_base_bdevs_operational": 3, 00:08:44.248 "base_bdevs_list": [ 00:08:44.248 { 00:08:44.248 "name": "pt1", 00:08:44.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.248 "is_configured": true, 00:08:44.248 "data_offset": 2048, 00:08:44.248 "data_size": 63488 00:08:44.248 }, 00:08:44.248 { 00:08:44.248 "name": null, 00:08:44.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.248 "is_configured": false, 00:08:44.248 "data_offset": 0, 00:08:44.248 "data_size": 63488 00:08:44.248 }, 00:08:44.248 { 00:08:44.249 "name": null, 00:08:44.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.249 
"is_configured": false, 00:08:44.249 "data_offset": 2048, 00:08:44.249 "data_size": 63488 00:08:44.249 } 00:08:44.249 ] 00:08:44.249 }' 00:08:44.249 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.249 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.817 [2024-12-09 14:41:22.725715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.817 [2024-12-09 14:41:22.725832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.817 [2024-12-09 14:41:22.725868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:44.817 [2024-12-09 14:41:22.725897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.817 [2024-12-09 14:41:22.726406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.817 [2024-12-09 14:41:22.726470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.817 [2024-12-09 14:41:22.726596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.817 [2024-12-09 14:41:22.726652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.817 pt2 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.817 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.817 [2024-12-09 14:41:22.737690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:44.817 [2024-12-09 14:41:22.737773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.817 [2024-12-09 14:41:22.737802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:44.817 [2024-12-09 14:41:22.737829] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.817 [2024-12-09 14:41:22.738186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.817 [2024-12-09 14:41:22.738243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:44.817 [2024-12-09 14:41:22.738324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:44.817 [2024-12-09 14:41:22.738379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:44.818 [2024-12-09 14:41:22.738532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:44.818 [2024-12-09 14:41:22.738583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.818 [2024-12-09 14:41:22.738846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:44.818 [2024-12-09 14:41:22.739042] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:44.818 [2024-12-09 14:41:22.739080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:44.818 [2024-12-09 14:41:22.739257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.818 pt3 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.818 "name": "raid_bdev1", 00:08:44.818 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:44.818 "strip_size_kb": 64, 00:08:44.818 "state": "online", 00:08:44.818 "raid_level": "raid0", 00:08:44.818 "superblock": true, 00:08:44.818 "num_base_bdevs": 3, 00:08:44.818 "num_base_bdevs_discovered": 3, 00:08:44.818 "num_base_bdevs_operational": 3, 00:08:44.818 "base_bdevs_list": [ 00:08:44.818 { 00:08:44.818 "name": "pt1", 00:08:44.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.818 "is_configured": true, 00:08:44.818 "data_offset": 2048, 00:08:44.818 "data_size": 63488 00:08:44.818 }, 00:08:44.818 { 00:08:44.818 "name": "pt2", 00:08:44.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.818 "is_configured": true, 00:08:44.818 "data_offset": 2048, 00:08:44.818 "data_size": 63488 00:08:44.818 }, 00:08:44.818 { 00:08:44.818 "name": "pt3", 00:08:44.818 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.818 "is_configured": true, 00:08:44.818 "data_offset": 2048, 00:08:44.818 "data_size": 63488 00:08:44.818 } 00:08:44.818 ] 00:08:44.818 }' 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.818 14:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.078 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:45.078 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:45.078 14:41:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.078 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.078 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.078 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.337 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.337 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.337 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.337 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.337 [2024-12-09 14:41:23.209204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.337 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.337 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.337 "name": "raid_bdev1", 00:08:45.337 "aliases": [ 00:08:45.337 "87b73315-45db-4ecb-ba50-5dfa6b8d7a95" 00:08:45.337 ], 00:08:45.337 "product_name": "Raid Volume", 00:08:45.337 "block_size": 512, 00:08:45.337 "num_blocks": 190464, 00:08:45.337 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:45.337 "assigned_rate_limits": { 00:08:45.337 "rw_ios_per_sec": 0, 00:08:45.337 "rw_mbytes_per_sec": 0, 00:08:45.337 "r_mbytes_per_sec": 0, 00:08:45.337 "w_mbytes_per_sec": 0 00:08:45.337 }, 00:08:45.337 "claimed": false, 00:08:45.337 "zoned": false, 00:08:45.337 "supported_io_types": { 00:08:45.337 "read": true, 00:08:45.337 "write": true, 00:08:45.337 "unmap": true, 00:08:45.337 "flush": true, 00:08:45.337 "reset": true, 00:08:45.337 "nvme_admin": false, 00:08:45.337 "nvme_io": false, 00:08:45.337 "nvme_io_md": false, 00:08:45.337 
"write_zeroes": true, 00:08:45.337 "zcopy": false, 00:08:45.337 "get_zone_info": false, 00:08:45.337 "zone_management": false, 00:08:45.337 "zone_append": false, 00:08:45.337 "compare": false, 00:08:45.337 "compare_and_write": false, 00:08:45.337 "abort": false, 00:08:45.337 "seek_hole": false, 00:08:45.337 "seek_data": false, 00:08:45.337 "copy": false, 00:08:45.337 "nvme_iov_md": false 00:08:45.337 }, 00:08:45.337 "memory_domains": [ 00:08:45.337 { 00:08:45.337 "dma_device_id": "system", 00:08:45.337 "dma_device_type": 1 00:08:45.337 }, 00:08:45.337 { 00:08:45.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.337 "dma_device_type": 2 00:08:45.337 }, 00:08:45.337 { 00:08:45.337 "dma_device_id": "system", 00:08:45.337 "dma_device_type": 1 00:08:45.337 }, 00:08:45.337 { 00:08:45.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.337 "dma_device_type": 2 00:08:45.337 }, 00:08:45.337 { 00:08:45.337 "dma_device_id": "system", 00:08:45.337 "dma_device_type": 1 00:08:45.337 }, 00:08:45.337 { 00:08:45.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.337 "dma_device_type": 2 00:08:45.337 } 00:08:45.337 ], 00:08:45.338 "driver_specific": { 00:08:45.338 "raid": { 00:08:45.338 "uuid": "87b73315-45db-4ecb-ba50-5dfa6b8d7a95", 00:08:45.338 "strip_size_kb": 64, 00:08:45.338 "state": "online", 00:08:45.338 "raid_level": "raid0", 00:08:45.338 "superblock": true, 00:08:45.338 "num_base_bdevs": 3, 00:08:45.338 "num_base_bdevs_discovered": 3, 00:08:45.338 "num_base_bdevs_operational": 3, 00:08:45.338 "base_bdevs_list": [ 00:08:45.338 { 00:08:45.338 "name": "pt1", 00:08:45.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.338 "is_configured": true, 00:08:45.338 "data_offset": 2048, 00:08:45.338 "data_size": 63488 00:08:45.338 }, 00:08:45.338 { 00:08:45.338 "name": "pt2", 00:08:45.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.338 "is_configured": true, 00:08:45.338 "data_offset": 2048, 00:08:45.338 "data_size": 63488 00:08:45.338 }, 00:08:45.338 
{ 00:08:45.338 "name": "pt3", 00:08:45.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.338 "is_configured": true, 00:08:45.338 "data_offset": 2048, 00:08:45.338 "data_size": 63488 00:08:45.338 } 00:08:45.338 ] 00:08:45.338 } 00:08:45.338 } 00:08:45.338 }' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:45.338 pt2 00:08:45.338 pt3' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:45.338 14:41:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.338 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:45.598 
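The @189-@193 lines above compare the raid bdev's geometry against each base bdev. A minimal sketch of that logic (not the actual bdev_raid.sh, and the `'512   '` value is inferred from the trace): jq joins `[.block_size, .md_size, .md_interleave, .dif_type]` with spaces, null fields become empty strings, so a 512-byte bdev with no metadata yields `512` followed by three trailing blanks, and the loop demands a byte-identical string from every base bdev.

```shell
# Sketch of the bdev_raid.sh@189-@193 comparison seen in the trace above.
# Assumption: each jq join over [.block_size,.md_size,.md_interleave,.dif_type]
# produced '512   ' (three trailing blanks from the null metadata fields).
cmp_raid_bdev='512   '        # joined fields of raid_bdev1
for name in pt1 pt2 pt3; do
  cmp_base_bdev='512   '      # joined fields reported for this base bdev
  # Byte-identical match required, trailing blanks included.
  [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] || { echo "mismatch on $name"; exit 1; }
done
echo "all base bdevs match"
```

The trailing blanks explain the odd-looking `[[ 512 == \5\1\2\ \ \ ]]` lines in the trace: xtrace escapes each space of the expected string individually.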
[2024-12-09 14:41:23.480747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 87b73315-45db-4ecb-ba50-5dfa6b8d7a95 '!=' 87b73315-45db-4ecb-ba50-5dfa6b8d7a95 ']' 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66349 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66349 ']' 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66349 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66349 00:08:45.598 killing process with pid 66349 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66349' 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66349 00:08:45.598 [2024-12-09 14:41:23.564416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.598 14:41:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 66349 00:08:45.598 [2024-12-09 14:41:23.564514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.598 [2024-12-09 14:41:23.564573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.598 [2024-12-09 14:41:23.564612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:45.857 [2024-12-09 14:41:23.865695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.238 14:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.238 00:08:47.238 real 0m5.329s 00:08:47.238 user 0m7.715s 00:08:47.238 sys 0m0.829s 00:08:47.238 14:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.238 14:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 ************************************ 00:08:47.238 END TEST raid_superblock_test 00:08:47.238 ************************************ 00:08:47.238 14:41:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:47.238 14:41:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.238 14:41:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.238 14:41:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 ************************************ 00:08:47.238 START TEST raid_read_error_test 00:08:47.238 ************************************ 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.238 14:41:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.238 14:41:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qoR0K8vAS7 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66602 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66602 00:08:47.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66602 ']' 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.239 14:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.239 [2024-12-09 14:41:25.162053] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:08:47.239 [2024-12-09 14:41:25.162172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66602 ] 00:08:47.239 [2024-12-09 14:41:25.338137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.497 [2024-12-09 14:41:25.452013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.756 [2024-12-09 14:41:25.650341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.756 [2024-12-09 14:41:25.650499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.016 BaseBdev1_malloc 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.016 true 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.016 [2024-12-09 14:41:26.082949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.016 [2024-12-09 14:41:26.083004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.016 [2024-12-09 14:41:26.083023] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:48.016 [2024-12-09 14:41:26.083033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.016 [2024-12-09 14:41:26.085146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.016 [2024-12-09 14:41:26.085187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.016 BaseBdev1 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.016 BaseBdev2_malloc 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.016 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.275 true 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.275 [2024-12-09 14:41:26.146917] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.275 [2024-12-09 14:41:26.146972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.275 [2024-12-09 14:41:26.146989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.275 [2024-12-09 14:41:26.146999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.275 [2024-12-09 14:41:26.149098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.275 [2024-12-09 14:41:26.149200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.275 BaseBdev2 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.275 BaseBdev3_malloc 00:08:48.275 14:41:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.275 true 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.275 [2024-12-09 14:41:26.229069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.275 [2024-12-09 14:41:26.229127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.275 [2024-12-09 14:41:26.229145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:48.275 [2024-12-09 14:41:26.229156] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.275 [2024-12-09 14:41:26.231340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.275 [2024-12-09 14:41:26.231386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:48.275 BaseBdev3 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.275 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.276 [2024-12-09 14:41:26.241158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.276 [2024-12-09 14:41:26.243137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.276 [2024-12-09 14:41:26.243224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.276 [2024-12-09 14:41:26.243463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:48.276 [2024-12-09 14:41:26.243479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.276 [2024-12-09 14:41:26.243789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:48.276 [2024-12-09 14:41:26.243991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:48.276 [2024-12-09 14:41:26.244006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:48.276 [2024-12-09 14:41:26.244189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.276 14:41:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.276 "name": "raid_bdev1", 00:08:48.276 "uuid": "8109de97-6591-40a9-be16-e92be60bf0ae", 00:08:48.276 "strip_size_kb": 64, 00:08:48.276 "state": "online", 00:08:48.276 "raid_level": "raid0", 00:08:48.276 "superblock": true, 00:08:48.276 "num_base_bdevs": 3, 00:08:48.276 "num_base_bdevs_discovered": 3, 00:08:48.276 "num_base_bdevs_operational": 3, 00:08:48.276 "base_bdevs_list": [ 00:08:48.276 { 00:08:48.276 "name": "BaseBdev1", 00:08:48.276 "uuid": "5e7dd792-afd6-5a10-b897-223eda574a84", 00:08:48.276 "is_configured": true, 00:08:48.276 "data_offset": 2048, 00:08:48.276 "data_size": 63488 00:08:48.276 }, 00:08:48.276 { 00:08:48.276 "name": "BaseBdev2", 00:08:48.276 "uuid": "49eda679-11aa-5e87-bcea-6ddd53bb140e", 00:08:48.276 "is_configured": true, 00:08:48.276 "data_offset": 2048, 00:08:48.276 "data_size": 63488 
00:08:48.276 }, 00:08:48.276 { 00:08:48.276 "name": "BaseBdev3", 00:08:48.276 "uuid": "f381f5a8-a72d-513f-81ba-1ebc1ff5d5e7", 00:08:48.276 "is_configured": true, 00:08:48.276 "data_offset": 2048, 00:08:48.276 "data_size": 63488 00:08:48.276 } 00:08:48.276 ] 00:08:48.276 }' 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.276 14:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.534 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.534 14:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.794 [2024-12-09 14:41:26.701747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:49.733 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:49.733 14:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.734 "name": "raid_bdev1", 00:08:49.734 "uuid": "8109de97-6591-40a9-be16-e92be60bf0ae", 00:08:49.734 "strip_size_kb": 64, 00:08:49.734 "state": "online", 00:08:49.734 "raid_level": "raid0", 00:08:49.734 "superblock": true, 00:08:49.734 "num_base_bdevs": 3, 00:08:49.734 "num_base_bdevs_discovered": 3, 00:08:49.734 "num_base_bdevs_operational": 3, 00:08:49.734 "base_bdevs_list": [ 00:08:49.734 { 00:08:49.734 "name": "BaseBdev1", 00:08:49.734 "uuid": "5e7dd792-afd6-5a10-b897-223eda574a84", 00:08:49.734 "is_configured": true, 00:08:49.734 "data_offset": 2048, 00:08:49.734 "data_size": 63488 
00:08:49.734 }, 00:08:49.734 { 00:08:49.734 "name": "BaseBdev2", 00:08:49.734 "uuid": "49eda679-11aa-5e87-bcea-6ddd53bb140e", 00:08:49.734 "is_configured": true, 00:08:49.734 "data_offset": 2048, 00:08:49.734 "data_size": 63488 00:08:49.734 }, 00:08:49.734 { 00:08:49.734 "name": "BaseBdev3", 00:08:49.734 "uuid": "f381f5a8-a72d-513f-81ba-1ebc1ff5d5e7", 00:08:49.734 "is_configured": true, 00:08:49.734 "data_offset": 2048, 00:08:49.734 "data_size": 63488 00:08:49.734 } 00:08:49.734 ] 00:08:49.734 }' 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.734 14:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.993 [2024-12-09 14:41:28.045761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.993 [2024-12-09 14:41:28.045844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.993 [2024-12-09 14:41:28.048973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.993 [2024-12-09 14:41:28.049059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.993 [2024-12-09 14:41:28.049137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.993 [2024-12-09 14:41:28.049189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.993 { 00:08:49.993 "results": [ 00:08:49.993 { 00:08:49.993 "job": "raid_bdev1", 
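The `verify_raid_bdev_state` call above checks that `num_base_bdevs_discovered` matches the expected count after the injected read error (raid0 has no redundancy, so all three base bdevs must stay configured). A rough stand-in for that check, using grep on a trimmed copy of the RPC output instead of the script's jq pipeline (the JSON below is abbreviated from the dump above, not a real RPC response):

```shell
# Sketch only: count configured base bdevs the way verify_raid_bdev_state
# effectively does, here with grep -c rather than jq on bdev_raid_get_bdevs.
raid_bdev_info='{
  "name": "raid_bdev1",
  "num_base_bdevs": 3,
  "base_bdevs_list": [
    { "name": "BaseBdev1", "is_configured": true },
    { "name": "BaseBdev2", "is_configured": true },
    { "name": "BaseBdev3", "is_configured": true }
  ]
}'
discovered=$(printf '%s\n' "$raid_bdev_info" | grep -c '"is_configured": true')
expected=3
[ "$discovered" -eq "$expected" ] && echo "state verified: $discovered/$expected base bdevs"
```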
00:08:49.993 "core_mask": "0x1", 00:08:49.993 "workload": "randrw", 00:08:49.993 "percentage": 50, 00:08:49.993 "status": "finished", 00:08:49.993 "queue_depth": 1, 00:08:49.993 "io_size": 131072, 00:08:49.993 "runtime": 1.344859, 00:08:49.993 "iops": 15425.408909038048, 00:08:49.993 "mibps": 1928.176113629756, 00:08:49.993 "io_failed": 1, 00:08:49.993 "io_timeout": 0, 00:08:49.993 "avg_latency_us": 89.85280815957786, 00:08:49.993 "min_latency_us": 26.829694323144103, 00:08:49.993 "max_latency_us": 1438.071615720524 00:08:49.993 } 00:08:49.993 ], 00:08:49.993 "core_count": 1 00:08:49.993 } 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66602 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66602 ']' 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66602 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66602 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66602' 00:08:49.993 killing process with pid 66602 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66602 00:08:49.993 [2024-12-09 14:41:28.093056] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.993 14:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66602 00:08:50.254 [2024-12-09 
14:41:28.327571] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.652 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qoR0K8vAS7 00:08:51.652 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:51.653 ************************************ 00:08:51.653 END TEST raid_read_error_test 00:08:51.653 ************************************ 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:51.653 00:08:51.653 real 0m4.465s 00:08:51.653 user 0m5.243s 00:08:51.653 sys 0m0.558s 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.653 14:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.653 14:41:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:51.653 14:41:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.653 14:41:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.653 14:41:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.653 ************************************ 00:08:51.653 START TEST raid_write_error_test 00:08:51.653 ************************************ 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:51.653 14:41:29 
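The `fail_per_s=0.74` extracted at @845 above is consistent with the bdevperf results JSON earlier in the log: one failed read (`"io_failed": 1`) over a 1.344859 s runtime. A quick sketch of that arithmetic and the @849 check (values taken from the log; the real script greps field 6 of the bdevperf summary line rather than recomputing):

```shell
# Sketch: recompute the failed-I/O rate from the results JSON shown above
# (io_failed / runtime) and apply the bdev_raid.sh@849-style check that at
# least one injected read error was actually observed on the raid0 volume.
fail_per_s=$(awk 'BEGIN { printf "%.2f", 1 / 1.344859 }')
echo "fail_per_s=$fail_per_s"     # 0.74, matching the value grepped at @845
[[ "$fail_per_s" != "0.00" ]] && echo "error injection observed"
```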
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:51.653 14:41:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9TwV3o8jZU 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66748 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66748 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 66748 ']' 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:51.653 14:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:51.653 [2024-12-09 14:41:29.694283] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... [2024-12-09 14:41:29.694407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66748 ]
00:08:51.913 [2024-12-09 14:41:29.865040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:51.913 [2024-12-09 14:41:29.978096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:52.172 [2024-12-09 14:41:30.175881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:52.172 [2024-12-09 14:41:30.175913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:52.431 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:52.431 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:08:52.431 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:52.431 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:52.431 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.431 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.690 BaseBdev1_malloc
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.690 true
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.690 [2024-12-09 14:41:30.584758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:52.690 [2024-12-09 14:41:30.584827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:52.690 [2024-12-09 14:41:30.584855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:52.690 [2024-12-09 14:41:30.584871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:52.690 [2024-12-09 14:41:30.587415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:52.690 [2024-12-09 14:41:30.587507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:52.690 BaseBdev1
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.690 BaseBdev2_malloc
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.690 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.690 true
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.691 [2024-12-09 14:41:30.655075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:52.691 [2024-12-09 14:41:30.655148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:52.691 [2024-12-09 14:41:30.655172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:52.691 [2024-12-09 14:41:30.655187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:52.691 [2024-12-09 14:41:30.657730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:52.691 [2024-12-09 14:41:30.657781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:52.691 BaseBdev2
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.691 BaseBdev3_malloc
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.691 true
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.691 [2024-12-09 14:41:30.732124] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:08:52.691 [2024-12-09 14:41:30.732177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:52.691 [2024-12-09 14:41:30.732193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:08:52.691 [2024-12-09 14:41:30.732203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:52.691 [2024-12-09 14:41:30.734215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:52.691 [2024-12-09 14:41:30.734253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:08:52.691 BaseBdev3
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.691 [2024-12-09 14:41:30.744175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:52.691 [2024-12-09 14:41:30.745910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:52.691 [2024-12-09 14:41:30.745993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:52.691 [2024-12-09 14:41:30.746186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:08:52.691 [2024-12-09 14:41:30.746199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:52.691 [2024-12-09 14:41:30.746462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:08:52.691 [2024-12-09 14:41:30.746637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:08:52.691 [2024-12-09 14:41:30.746651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:08:52.691 [2024-12-09 14:41:30.746802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:52.691 "name": "raid_bdev1",
00:08:52.691 "uuid": "424407ed-b3d5-477d-b087-1461b04a4f15",
00:08:52.691 "strip_size_kb": 64,
00:08:52.691 "state": "online",
00:08:52.691 "raid_level": "raid0",
00:08:52.691 "superblock": true,
00:08:52.691 "num_base_bdevs": 3,
00:08:52.691 "num_base_bdevs_discovered": 3,
00:08:52.691 "num_base_bdevs_operational": 3,
00:08:52.691 "base_bdevs_list": [
00:08:52.691 {
00:08:52.691 "name": "BaseBdev1",
00:08:52.691 "uuid": "e6b21d9b-e1a7-53dc-9c88-5b8a708c4bf0",
00:08:52.691 "is_configured": true,
00:08:52.691 "data_offset": 2048,
00:08:52.691 "data_size": 63488
00:08:52.691 },
00:08:52.691 {
00:08:52.691 "name": "BaseBdev2",
00:08:52.691 "uuid": "dde1560a-58a4-5ecf-908e-95808f9c6e9a",
00:08:52.691 "is_configured": true,
00:08:52.691 "data_offset": 2048,
00:08:52.691 "data_size": 63488
00:08:52.691 },
00:08:52.691 {
00:08:52.691 "name": "BaseBdev3",
00:08:52.691 "uuid": "f4e437f7-8eb2-5162-a27b-418666120455",
00:08:52.691 "is_configured": true,
00:08:52.691 "data_offset": 2048,
00:08:52.691 "data_size": 63488
00:08:52.691 }
00:08:52.691 ]
00:08:52.691 }'
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:52.691 14:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:53.259 14:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:53.259 14:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:53.259 [2024-12-09 14:41:31.264630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.198 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:54.198 "name": "raid_bdev1",
00:08:54.198 "uuid": "424407ed-b3d5-477d-b087-1461b04a4f15",
00:08:54.198 "strip_size_kb": 64,
00:08:54.198 "state": "online",
00:08:54.198 "raid_level": "raid0",
00:08:54.198 "superblock": true,
00:08:54.198 "num_base_bdevs": 3,
00:08:54.198 "num_base_bdevs_discovered": 3,
00:08:54.198 "num_base_bdevs_operational": 3,
00:08:54.198 "base_bdevs_list": [
00:08:54.198 {
00:08:54.198 "name": "BaseBdev1",
00:08:54.199 "uuid": "e6b21d9b-e1a7-53dc-9c88-5b8a708c4bf0",
00:08:54.199 "is_configured": true,
00:08:54.199 "data_offset": 2048,
00:08:54.199 "data_size": 63488
00:08:54.199 },
00:08:54.199 {
00:08:54.199 "name": "BaseBdev2",
00:08:54.199 "uuid": "dde1560a-58a4-5ecf-908e-95808f9c6e9a",
00:08:54.199 "is_configured": true,
00:08:54.199 "data_offset": 2048,
00:08:54.199 "data_size": 63488
00:08:54.199 },
00:08:54.199 {
00:08:54.199 "name": "BaseBdev3",
00:08:54.199 "uuid": "f4e437f7-8eb2-5162-a27b-418666120455",
00:08:54.199 "is_configured": true,
00:08:54.199 "data_offset": 2048,
00:08:54.199 "data_size": 63488
00:08:54.199 }
00:08:54.199 ]
00:08:54.199 }'
00:08:54.199 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:54.199 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.458 [2024-12-09 14:41:32.534278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:54.458 [2024-12-09 14:41:32.534368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:54.458 [2024-12-09 14:41:32.537083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:54.458 [2024-12-09 14:41:32.537164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:54.458 [2024-12-09 14:41:32.537221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:54.458 [2024-12-09 14:41:32.537261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:08:54.458 {
00:08:54.458 "results": [
00:08:54.458 {
00:08:54.458 "job": "raid_bdev1",
00:08:54.458 "core_mask": "0x1",
00:08:54.458 "workload": "randrw",
00:08:54.458 "percentage": 50,
00:08:54.458 "status": "finished",
00:08:54.458 "queue_depth": 1,
00:08:54.458 "io_size": 131072,
00:08:54.458 "runtime": 1.270366,
00:08:54.458 "iops": 15393.988818970281,
00:08:54.458 "mibps": 1924.2486023712852,
00:08:54.458 "io_failed": 1,
00:08:54.458 "io_timeout": 0,
00:08:54.458 "avg_latency_us": 90.08214144166654,
00:08:54.458 "min_latency_us": 26.494323144104804,
00:08:54.458 "max_latency_us": 1373.6803493449781
00:08:54.458 }
00:08:54.458 ],
00:08:54.458 "core_count": 1
00:08:54.458 }
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66748
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 66748 ']'
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 66748
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:54.458 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66748
00:08:54.718 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:54.718 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:54.718 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66748'
00:08:54.718 killing process with pid 66748
00:08:54.718 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 66748
00:08:54.718 [2024-12-09 14:41:32.582755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:54.718 14:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 66748
00:08:54.718 [2024-12-09 14:41:32.811613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9TwV3o8jZU
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:56.099 ************************************
00:08:56.099 END TEST raid_write_error_test
00:08:56.099 ************************************
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]]
00:08:56.099
00:08:56.099 real	0m4.398s
00:08:56.099 user	0m5.148s
00:08:56.099 sys	0m0.524s
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.099 14:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.099 14:41:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:56.099 14:41:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:08:56.099 14:41:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:56.099 14:41:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.099 14:41:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:56.099 ************************************
00:08:56.099 START TEST raid_state_function_test
************************************
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66886
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66886'
Process raid pid: 66886
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66886
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66886 ']'
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:56.099 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.099 [2024-12-09 14:41:34.156173] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... [2024-12-09 14:41:34.156387] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:56.359 [2024-12-09 14:41:34.308901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.359 [2024-12-09 14:41:34.422806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.621 [2024-12-09 14:41:34.626609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:56.621 [2024-12-09 14:41:34.626669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.908 [2024-12-09 14:41:34.990081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:56.908 [2024-12-09 14:41:34.990141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:56.908 [2024-12-09 14:41:34.990152] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:56.908 [2024-12-09 14:41:34.990162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:56.908 [2024-12-09 14:41:34.990168] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:56.908 [2024-12-09 14:41:34.990176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:56.908 14:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:56.908 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.908 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:56.908 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.908 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.171 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.171 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:57.171 "name": "Existed_Raid",
00:08:57.171 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.171 "strip_size_kb": 64,
00:08:57.171 "state": "configuring",
00:08:57.171 "raid_level": "concat",
00:08:57.171 "superblock": false,
00:08:57.171 "num_base_bdevs": 3,
00:08:57.171 "num_base_bdevs_discovered": 0,
00:08:57.171 "num_base_bdevs_operational": 3,
00:08:57.171 "base_bdevs_list": [
00:08:57.171 {
00:08:57.171 "name": "BaseBdev1",
00:08:57.171 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.171 "is_configured": false,
00:08:57.171 "data_offset": 0,
00:08:57.171 "data_size": 0
00:08:57.171 },
00:08:57.171 {
00:08:57.171 "name": "BaseBdev2",
00:08:57.171 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.171 "is_configured": false,
00:08:57.171 "data_offset": 0,
00:08:57.171 "data_size": 0
00:08:57.171 },
00:08:57.171 {
00:08:57.171 "name": "BaseBdev3",
00:08:57.171 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.171 "is_configured": false,
00:08:57.171 "data_offset": 0,
00:08:57.171 "data_size": 0
00:08:57.171 }
00:08:57.171 ]
00:08:57.171 }'
00:08:57.171 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:57.171 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.435 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.436 [2024-12-09 14:41:35.453290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:57.436 [2024-12-09 14:41:35.453420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.436 [2024-12-09 14:41:35.465265] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:57.436 [2024-12-09 14:41:35.465379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:57.436 [2024-12-09 14:41:35.465428] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:57.436 [2024-12-09 14:41:35.465466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev
BaseBdev2 doesn't exist now 00:08:57.436 [2024-12-09 14:41:35.465524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.436 [2024-12-09 14:41:35.465591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.436 [2024-12-09 14:41:35.514296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.436 BaseBdev1 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.436 [ 00:08:57.436 { 00:08:57.436 "name": "BaseBdev1", 00:08:57.436 "aliases": [ 00:08:57.436 "8417fd51-5070-4cb0-8490-e23709cdf81a" 00:08:57.436 ], 00:08:57.436 "product_name": "Malloc disk", 00:08:57.436 "block_size": 512, 00:08:57.436 "num_blocks": 65536, 00:08:57.436 "uuid": "8417fd51-5070-4cb0-8490-e23709cdf81a", 00:08:57.436 "assigned_rate_limits": { 00:08:57.436 "rw_ios_per_sec": 0, 00:08:57.436 "rw_mbytes_per_sec": 0, 00:08:57.436 "r_mbytes_per_sec": 0, 00:08:57.436 "w_mbytes_per_sec": 0 00:08:57.436 }, 00:08:57.436 "claimed": true, 00:08:57.436 "claim_type": "exclusive_write", 00:08:57.436 "zoned": false, 00:08:57.436 "supported_io_types": { 00:08:57.436 "read": true, 00:08:57.436 "write": true, 00:08:57.436 "unmap": true, 00:08:57.436 "flush": true, 00:08:57.436 "reset": true, 00:08:57.436 "nvme_admin": false, 00:08:57.436 "nvme_io": false, 00:08:57.436 "nvme_io_md": false, 00:08:57.436 "write_zeroes": true, 00:08:57.436 "zcopy": true, 00:08:57.436 "get_zone_info": false, 00:08:57.436 "zone_management": false, 00:08:57.436 "zone_append": false, 00:08:57.436 "compare": false, 00:08:57.436 "compare_and_write": false, 00:08:57.436 "abort": true, 00:08:57.436 "seek_hole": false, 00:08:57.436 "seek_data": false, 00:08:57.436 "copy": true, 00:08:57.436 "nvme_iov_md": false 00:08:57.436 }, 00:08:57.436 "memory_domains": [ 00:08:57.436 { 00:08:57.436 "dma_device_id": "system", 00:08:57.436 "dma_device_type": 1 00:08:57.436 }, 00:08:57.436 { 00:08:57.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:57.436 "dma_device_type": 2 00:08:57.436 } 00:08:57.436 ], 00:08:57.436 "driver_specific": {} 00:08:57.436 } 00:08:57.436 ] 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.436 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.695 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.695 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.695 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.695 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.695 14:41:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.695 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.695 "name": "Existed_Raid", 00:08:57.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.695 "strip_size_kb": 64, 00:08:57.695 "state": "configuring", 00:08:57.695 "raid_level": "concat", 00:08:57.695 "superblock": false, 00:08:57.695 "num_base_bdevs": 3, 00:08:57.695 "num_base_bdevs_discovered": 1, 00:08:57.695 "num_base_bdevs_operational": 3, 00:08:57.695 "base_bdevs_list": [ 00:08:57.695 { 00:08:57.695 "name": "BaseBdev1", 00:08:57.695 "uuid": "8417fd51-5070-4cb0-8490-e23709cdf81a", 00:08:57.695 "is_configured": true, 00:08:57.695 "data_offset": 0, 00:08:57.695 "data_size": 65536 00:08:57.695 }, 00:08:57.695 { 00:08:57.695 "name": "BaseBdev2", 00:08:57.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.695 "is_configured": false, 00:08:57.695 "data_offset": 0, 00:08:57.695 "data_size": 0 00:08:57.695 }, 00:08:57.695 { 00:08:57.695 "name": "BaseBdev3", 00:08:57.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.695 "is_configured": false, 00:08:57.695 "data_offset": 0, 00:08:57.695 "data_size": 0 00:08:57.695 } 00:08:57.695 ] 00:08:57.695 }' 00:08:57.695 14:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.695 14:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.954 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.954 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.954 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.954 [2024-12-09 14:41:36.013553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.954 [2024-12-09 14:41:36.013701] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:57.954 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.954 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.954 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.954 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.954 [2024-12-09 14:41:36.025584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.954 [2024-12-09 14:41:36.027546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.954 [2024-12-09 14:41:36.027600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.954 [2024-12-09 14:41:36.027612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.954 [2024-12-09 14:41:36.027623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.955 14:41:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.955 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.214 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.214 "name": "Existed_Raid", 00:08:58.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.214 "strip_size_kb": 64, 00:08:58.214 "state": "configuring", 00:08:58.214 "raid_level": "concat", 00:08:58.214 "superblock": false, 00:08:58.214 "num_base_bdevs": 3, 00:08:58.214 "num_base_bdevs_discovered": 1, 00:08:58.214 "num_base_bdevs_operational": 3, 00:08:58.214 "base_bdevs_list": [ 00:08:58.214 { 00:08:58.214 "name": "BaseBdev1", 00:08:58.214 "uuid": "8417fd51-5070-4cb0-8490-e23709cdf81a", 00:08:58.214 "is_configured": true, 00:08:58.214 "data_offset": 
0, 00:08:58.214 "data_size": 65536 00:08:58.214 }, 00:08:58.214 { 00:08:58.214 "name": "BaseBdev2", 00:08:58.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.214 "is_configured": false, 00:08:58.214 "data_offset": 0, 00:08:58.214 "data_size": 0 00:08:58.214 }, 00:08:58.214 { 00:08:58.214 "name": "BaseBdev3", 00:08:58.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.214 "is_configured": false, 00:08:58.214 "data_offset": 0, 00:08:58.214 "data_size": 0 00:08:58.214 } 00:08:58.214 ] 00:08:58.214 }' 00:08:58.214 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.214 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.474 [2024-12-09 14:41:36.438206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.474 BaseBdev2 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.474 [ 00:08:58.474 { 00:08:58.474 "name": "BaseBdev2", 00:08:58.474 "aliases": [ 00:08:58.474 "4cacc061-a4bf-498f-a6ae-37dab15a8567" 00:08:58.474 ], 00:08:58.474 "product_name": "Malloc disk", 00:08:58.474 "block_size": 512, 00:08:58.474 "num_blocks": 65536, 00:08:58.474 "uuid": "4cacc061-a4bf-498f-a6ae-37dab15a8567", 00:08:58.474 "assigned_rate_limits": { 00:08:58.474 "rw_ios_per_sec": 0, 00:08:58.474 "rw_mbytes_per_sec": 0, 00:08:58.474 "r_mbytes_per_sec": 0, 00:08:58.474 "w_mbytes_per_sec": 0 00:08:58.474 }, 00:08:58.474 "claimed": true, 00:08:58.474 "claim_type": "exclusive_write", 00:08:58.474 "zoned": false, 00:08:58.474 "supported_io_types": { 00:08:58.474 "read": true, 00:08:58.474 "write": true, 00:08:58.474 "unmap": true, 00:08:58.474 "flush": true, 00:08:58.474 "reset": true, 00:08:58.474 "nvme_admin": false, 00:08:58.474 "nvme_io": false, 00:08:58.474 "nvme_io_md": false, 00:08:58.474 "write_zeroes": true, 00:08:58.474 "zcopy": true, 00:08:58.474 "get_zone_info": false, 00:08:58.474 "zone_management": false, 00:08:58.474 "zone_append": false, 00:08:58.474 "compare": false, 00:08:58.474 "compare_and_write": false, 00:08:58.474 "abort": true, 00:08:58.474 "seek_hole": 
false, 00:08:58.474 "seek_data": false, 00:08:58.474 "copy": true, 00:08:58.474 "nvme_iov_md": false 00:08:58.474 }, 00:08:58.474 "memory_domains": [ 00:08:58.474 { 00:08:58.474 "dma_device_id": "system", 00:08:58.474 "dma_device_type": 1 00:08:58.474 }, 00:08:58.474 { 00:08:58.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.474 "dma_device_type": 2 00:08:58.474 } 00:08:58.474 ], 00:08:58.474 "driver_specific": {} 00:08:58.474 } 00:08:58.474 ] 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.474 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.474 "name": "Existed_Raid", 00:08:58.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.474 "strip_size_kb": 64, 00:08:58.474 "state": "configuring", 00:08:58.474 "raid_level": "concat", 00:08:58.474 "superblock": false, 00:08:58.474 "num_base_bdevs": 3, 00:08:58.474 "num_base_bdevs_discovered": 2, 00:08:58.474 "num_base_bdevs_operational": 3, 00:08:58.474 "base_bdevs_list": [ 00:08:58.474 { 00:08:58.474 "name": "BaseBdev1", 00:08:58.474 "uuid": "8417fd51-5070-4cb0-8490-e23709cdf81a", 00:08:58.474 "is_configured": true, 00:08:58.474 "data_offset": 0, 00:08:58.474 "data_size": 65536 00:08:58.474 }, 00:08:58.474 { 00:08:58.474 "name": "BaseBdev2", 00:08:58.474 "uuid": "4cacc061-a4bf-498f-a6ae-37dab15a8567", 00:08:58.474 "is_configured": true, 00:08:58.474 "data_offset": 0, 00:08:58.474 "data_size": 65536 00:08:58.474 }, 00:08:58.474 { 00:08:58.475 "name": "BaseBdev3", 00:08:58.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.475 "is_configured": false, 00:08:58.475 "data_offset": 0, 00:08:58.475 "data_size": 0 00:08:58.475 } 00:08:58.475 ] 00:08:58.475 }' 00:08:58.475 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.475 14:41:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.043 [2024-12-09 14:41:36.945181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.043 [2024-12-09 14:41:36.945316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.043 [2024-12-09 14:41:36.945349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:59.043 [2024-12-09 14:41:36.945653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:59.043 [2024-12-09 14:41:36.946040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.043 [2024-12-09 14:41:36.946087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:59.043 [2024-12-09 14:41:36.946368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.043 BaseBdev3 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.043 14:41:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.043 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.044 [ 00:08:59.044 { 00:08:59.044 "name": "BaseBdev3", 00:08:59.044 "aliases": [ 00:08:59.044 "d5b976e9-61e4-4c8a-992a-a91c3c1a4021" 00:08:59.044 ], 00:08:59.044 "product_name": "Malloc disk", 00:08:59.044 "block_size": 512, 00:08:59.044 "num_blocks": 65536, 00:08:59.044 "uuid": "d5b976e9-61e4-4c8a-992a-a91c3c1a4021", 00:08:59.044 "assigned_rate_limits": { 00:08:59.044 "rw_ios_per_sec": 0, 00:08:59.044 "rw_mbytes_per_sec": 0, 00:08:59.044 "r_mbytes_per_sec": 0, 00:08:59.044 "w_mbytes_per_sec": 0 00:08:59.044 }, 00:08:59.044 "claimed": true, 00:08:59.044 "claim_type": "exclusive_write", 00:08:59.044 "zoned": false, 00:08:59.044 "supported_io_types": { 00:08:59.044 "read": true, 00:08:59.044 "write": true, 00:08:59.044 "unmap": true, 00:08:59.044 "flush": true, 00:08:59.044 "reset": true, 00:08:59.044 "nvme_admin": false, 00:08:59.044 "nvme_io": false, 00:08:59.044 "nvme_io_md": false, 00:08:59.044 "write_zeroes": true, 00:08:59.044 "zcopy": true, 00:08:59.044 "get_zone_info": false, 00:08:59.044 "zone_management": false, 00:08:59.044 "zone_append": false, 00:08:59.044 "compare": false, 
00:08:59.044 "compare_and_write": false, 00:08:59.044 "abort": true, 00:08:59.044 "seek_hole": false, 00:08:59.044 "seek_data": false, 00:08:59.044 "copy": true, 00:08:59.044 "nvme_iov_md": false 00:08:59.044 }, 00:08:59.044 "memory_domains": [ 00:08:59.044 { 00:08:59.044 "dma_device_id": "system", 00:08:59.044 "dma_device_type": 1 00:08:59.044 }, 00:08:59.044 { 00:08:59.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.044 "dma_device_type": 2 00:08:59.044 } 00:08:59.044 ], 00:08:59.044 "driver_specific": {} 00:08:59.044 } 00:08:59.044 ] 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.044 14:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.044 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.044 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.044 "name": "Existed_Raid", 00:08:59.044 "uuid": "0b7cf6ae-78ab-41f0-8865-2f4d62886b0f", 00:08:59.044 "strip_size_kb": 64, 00:08:59.044 "state": "online", 00:08:59.044 "raid_level": "concat", 00:08:59.044 "superblock": false, 00:08:59.044 "num_base_bdevs": 3, 00:08:59.044 "num_base_bdevs_discovered": 3, 00:08:59.044 "num_base_bdevs_operational": 3, 00:08:59.044 "base_bdevs_list": [ 00:08:59.044 { 00:08:59.044 "name": "BaseBdev1", 00:08:59.044 "uuid": "8417fd51-5070-4cb0-8490-e23709cdf81a", 00:08:59.044 "is_configured": true, 00:08:59.044 "data_offset": 0, 00:08:59.044 "data_size": 65536 00:08:59.044 }, 00:08:59.044 { 00:08:59.044 "name": "BaseBdev2", 00:08:59.044 "uuid": "4cacc061-a4bf-498f-a6ae-37dab15a8567", 00:08:59.044 "is_configured": true, 00:08:59.044 "data_offset": 0, 00:08:59.044 "data_size": 65536 00:08:59.044 }, 00:08:59.044 { 00:08:59.044 "name": "BaseBdev3", 00:08:59.044 "uuid": "d5b976e9-61e4-4c8a-992a-a91c3c1a4021", 00:08:59.044 "is_configured": true, 00:08:59.044 "data_offset": 0, 00:08:59.044 "data_size": 65536 00:08:59.044 } 00:08:59.044 ] 00:08:59.044 }' 00:08:59.044 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:59.044 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.614 [2024-12-09 14:41:37.436700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.614 "name": "Existed_Raid", 00:08:59.614 "aliases": [ 00:08:59.614 "0b7cf6ae-78ab-41f0-8865-2f4d62886b0f" 00:08:59.614 ], 00:08:59.614 "product_name": "Raid Volume", 00:08:59.614 "block_size": 512, 00:08:59.614 "num_blocks": 196608, 00:08:59.614 "uuid": "0b7cf6ae-78ab-41f0-8865-2f4d62886b0f", 00:08:59.614 "assigned_rate_limits": { 00:08:59.614 "rw_ios_per_sec": 0, 00:08:59.614 "rw_mbytes_per_sec": 0, 00:08:59.614 "r_mbytes_per_sec": 
0, 00:08:59.614 "w_mbytes_per_sec": 0 00:08:59.614 }, 00:08:59.614 "claimed": false, 00:08:59.614 "zoned": false, 00:08:59.614 "supported_io_types": { 00:08:59.614 "read": true, 00:08:59.614 "write": true, 00:08:59.614 "unmap": true, 00:08:59.614 "flush": true, 00:08:59.614 "reset": true, 00:08:59.614 "nvme_admin": false, 00:08:59.614 "nvme_io": false, 00:08:59.614 "nvme_io_md": false, 00:08:59.614 "write_zeroes": true, 00:08:59.614 "zcopy": false, 00:08:59.614 "get_zone_info": false, 00:08:59.614 "zone_management": false, 00:08:59.614 "zone_append": false, 00:08:59.614 "compare": false, 00:08:59.614 "compare_and_write": false, 00:08:59.614 "abort": false, 00:08:59.614 "seek_hole": false, 00:08:59.614 "seek_data": false, 00:08:59.614 "copy": false, 00:08:59.614 "nvme_iov_md": false 00:08:59.614 }, 00:08:59.614 "memory_domains": [ 00:08:59.614 { 00:08:59.614 "dma_device_id": "system", 00:08:59.614 "dma_device_type": 1 00:08:59.614 }, 00:08:59.614 { 00:08:59.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.614 "dma_device_type": 2 00:08:59.614 }, 00:08:59.614 { 00:08:59.614 "dma_device_id": "system", 00:08:59.614 "dma_device_type": 1 00:08:59.614 }, 00:08:59.614 { 00:08:59.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.614 "dma_device_type": 2 00:08:59.614 }, 00:08:59.614 { 00:08:59.614 "dma_device_id": "system", 00:08:59.614 "dma_device_type": 1 00:08:59.614 }, 00:08:59.614 { 00:08:59.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.614 "dma_device_type": 2 00:08:59.614 } 00:08:59.614 ], 00:08:59.614 "driver_specific": { 00:08:59.614 "raid": { 00:08:59.614 "uuid": "0b7cf6ae-78ab-41f0-8865-2f4d62886b0f", 00:08:59.614 "strip_size_kb": 64, 00:08:59.614 "state": "online", 00:08:59.614 "raid_level": "concat", 00:08:59.614 "superblock": false, 00:08:59.614 "num_base_bdevs": 3, 00:08:59.614 "num_base_bdevs_discovered": 3, 00:08:59.614 "num_base_bdevs_operational": 3, 00:08:59.614 "base_bdevs_list": [ 00:08:59.614 { 00:08:59.614 "name": "BaseBdev1", 
00:08:59.614 "uuid": "8417fd51-5070-4cb0-8490-e23709cdf81a", 00:08:59.614 "is_configured": true, 00:08:59.614 "data_offset": 0, 00:08:59.614 "data_size": 65536 00:08:59.614 }, 00:08:59.614 { 00:08:59.614 "name": "BaseBdev2", 00:08:59.614 "uuid": "4cacc061-a4bf-498f-a6ae-37dab15a8567", 00:08:59.614 "is_configured": true, 00:08:59.614 "data_offset": 0, 00:08:59.614 "data_size": 65536 00:08:59.614 }, 00:08:59.614 { 00:08:59.614 "name": "BaseBdev3", 00:08:59.614 "uuid": "d5b976e9-61e4-4c8a-992a-a91c3c1a4021", 00:08:59.614 "is_configured": true, 00:08:59.614 "data_offset": 0, 00:08:59.614 "data_size": 65536 00:08:59.614 } 00:08:59.614 ] 00:08:59.614 } 00:08:59.614 } 00:08:59.614 }' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.614 BaseBdev2 00:08:59.614 BaseBdev3' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.614 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.615 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.615 [2024-12-09 14:41:37.688022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.615 [2024-12-09 14:41:37.688093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.615 [2024-12-09 14:41:37.688161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.873 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.873 "name": "Existed_Raid", 00:08:59.873 "uuid": "0b7cf6ae-78ab-41f0-8865-2f4d62886b0f", 00:08:59.873 "strip_size_kb": 64, 00:08:59.873 "state": "offline", 00:08:59.873 "raid_level": "concat", 00:08:59.873 "superblock": false, 00:08:59.873 "num_base_bdevs": 3, 00:08:59.873 "num_base_bdevs_discovered": 2, 00:08:59.873 "num_base_bdevs_operational": 2, 00:08:59.873 "base_bdevs_list": [ 00:08:59.873 { 00:08:59.873 "name": null, 00:08:59.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.873 "is_configured": false, 00:08:59.873 "data_offset": 0, 00:08:59.873 "data_size": 65536 00:08:59.873 }, 00:08:59.873 { 00:08:59.873 "name": "BaseBdev2", 00:08:59.873 "uuid": 
"4cacc061-a4bf-498f-a6ae-37dab15a8567", 00:08:59.873 "is_configured": true, 00:08:59.873 "data_offset": 0, 00:08:59.873 "data_size": 65536 00:08:59.873 }, 00:08:59.873 { 00:08:59.874 "name": "BaseBdev3", 00:08:59.874 "uuid": "d5b976e9-61e4-4c8a-992a-a91c3c1a4021", 00:08:59.874 "is_configured": true, 00:08:59.874 "data_offset": 0, 00:08:59.874 "data_size": 65536 00:08:59.874 } 00:08:59.874 ] 00:08:59.874 }' 00:08:59.874 14:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.874 14:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.134 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.134 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.134 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.134 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.134 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.134 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.134 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.393 [2024-12-09 14:41:38.285323] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.393 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.393 [2024-12-09 14:41:38.437274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.393 [2024-12-09 14:41:38.437330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.654 14:41:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 BaseBdev2 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.654 
14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 [ 00:09:00.654 { 00:09:00.654 "name": "BaseBdev2", 00:09:00.654 "aliases": [ 00:09:00.654 "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc" 00:09:00.654 ], 00:09:00.654 "product_name": "Malloc disk", 00:09:00.654 "block_size": 512, 00:09:00.654 "num_blocks": 65536, 00:09:00.654 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:00.654 "assigned_rate_limits": { 00:09:00.654 "rw_ios_per_sec": 0, 00:09:00.654 "rw_mbytes_per_sec": 0, 00:09:00.654 "r_mbytes_per_sec": 0, 00:09:00.654 "w_mbytes_per_sec": 0 00:09:00.654 }, 00:09:00.654 "claimed": false, 00:09:00.654 "zoned": false, 00:09:00.654 "supported_io_types": { 00:09:00.654 "read": true, 00:09:00.654 "write": true, 00:09:00.654 "unmap": true, 00:09:00.654 "flush": true, 00:09:00.654 "reset": true, 00:09:00.654 "nvme_admin": false, 00:09:00.654 "nvme_io": false, 00:09:00.654 "nvme_io_md": false, 00:09:00.654 "write_zeroes": true, 
00:09:00.654 "zcopy": true, 00:09:00.654 "get_zone_info": false, 00:09:00.654 "zone_management": false, 00:09:00.654 "zone_append": false, 00:09:00.654 "compare": false, 00:09:00.654 "compare_and_write": false, 00:09:00.654 "abort": true, 00:09:00.654 "seek_hole": false, 00:09:00.654 "seek_data": false, 00:09:00.654 "copy": true, 00:09:00.654 "nvme_iov_md": false 00:09:00.654 }, 00:09:00.654 "memory_domains": [ 00:09:00.654 { 00:09:00.654 "dma_device_id": "system", 00:09:00.654 "dma_device_type": 1 00:09:00.654 }, 00:09:00.654 { 00:09:00.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.654 "dma_device_type": 2 00:09:00.654 } 00:09:00.654 ], 00:09:00.654 "driver_specific": {} 00:09:00.654 } 00:09:00.654 ] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 BaseBdev3 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.654 14:41:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.654 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.654 [ 00:09:00.654 { 00:09:00.654 "name": "BaseBdev3", 00:09:00.654 "aliases": [ 00:09:00.654 "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003" 00:09:00.654 ], 00:09:00.654 "product_name": "Malloc disk", 00:09:00.654 "block_size": 512, 00:09:00.654 "num_blocks": 65536, 00:09:00.654 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:00.654 "assigned_rate_limits": { 00:09:00.654 "rw_ios_per_sec": 0, 00:09:00.654 "rw_mbytes_per_sec": 0, 00:09:00.654 "r_mbytes_per_sec": 0, 00:09:00.654 "w_mbytes_per_sec": 0 00:09:00.654 }, 00:09:00.655 "claimed": false, 00:09:00.655 "zoned": false, 00:09:00.655 "supported_io_types": { 00:09:00.655 "read": true, 00:09:00.655 "write": true, 00:09:00.655 "unmap": true, 00:09:00.655 "flush": true, 00:09:00.655 "reset": true, 00:09:00.655 "nvme_admin": false, 00:09:00.655 "nvme_io": false, 00:09:00.655 "nvme_io_md": false, 00:09:00.655 "write_zeroes": true, 
00:09:00.655 "zcopy": true, 00:09:00.655 "get_zone_info": false, 00:09:00.655 "zone_management": false, 00:09:00.655 "zone_append": false, 00:09:00.655 "compare": false, 00:09:00.655 "compare_and_write": false, 00:09:00.655 "abort": true, 00:09:00.655 "seek_hole": false, 00:09:00.655 "seek_data": false, 00:09:00.655 "copy": true, 00:09:00.655 "nvme_iov_md": false 00:09:00.655 }, 00:09:00.655 "memory_domains": [ 00:09:00.655 { 00:09:00.655 "dma_device_id": "system", 00:09:00.655 "dma_device_type": 1 00:09:00.655 }, 00:09:00.655 { 00:09:00.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.655 "dma_device_type": 2 00:09:00.655 } 00:09:00.655 ], 00:09:00.655 "driver_specific": {} 00:09:00.655 } 00:09:00.655 ] 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.655 [2024-12-09 14:41:38.757386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.655 [2024-12-09 14:41:38.757469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.655 [2024-12-09 14:41:38.757509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.655 [2024-12-09 14:41:38.759277] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.655 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.915 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.915 14:41:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.915 "name": "Existed_Raid", 00:09:00.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.915 "strip_size_kb": 64, 00:09:00.915 "state": "configuring", 00:09:00.915 "raid_level": "concat", 00:09:00.915 "superblock": false, 00:09:00.915 "num_base_bdevs": 3, 00:09:00.915 "num_base_bdevs_discovered": 2, 00:09:00.915 "num_base_bdevs_operational": 3, 00:09:00.915 "base_bdevs_list": [ 00:09:00.915 { 00:09:00.915 "name": "BaseBdev1", 00:09:00.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.915 "is_configured": false, 00:09:00.915 "data_offset": 0, 00:09:00.915 "data_size": 0 00:09:00.915 }, 00:09:00.915 { 00:09:00.915 "name": "BaseBdev2", 00:09:00.915 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:00.915 "is_configured": true, 00:09:00.915 "data_offset": 0, 00:09:00.915 "data_size": 65536 00:09:00.915 }, 00:09:00.915 { 00:09:00.915 "name": "BaseBdev3", 00:09:00.915 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:00.915 "is_configured": true, 00:09:00.915 "data_offset": 0, 00:09:00.915 "data_size": 65536 00:09:00.915 } 00:09:00.915 ] 00:09:00.915 }' 00:09:00.915 14:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.915 14:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.174 [2024-12-09 14:41:39.192701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.174 "name": "Existed_Raid", 00:09:01.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.174 "strip_size_kb": 64, 00:09:01.174 "state": "configuring", 00:09:01.174 "raid_level": "concat", 00:09:01.174 "superblock": false, 
00:09:01.174 "num_base_bdevs": 3, 00:09:01.174 "num_base_bdevs_discovered": 1, 00:09:01.174 "num_base_bdevs_operational": 3, 00:09:01.174 "base_bdevs_list": [ 00:09:01.174 { 00:09:01.174 "name": "BaseBdev1", 00:09:01.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.174 "is_configured": false, 00:09:01.174 "data_offset": 0, 00:09:01.174 "data_size": 0 00:09:01.174 }, 00:09:01.174 { 00:09:01.174 "name": null, 00:09:01.174 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:01.174 "is_configured": false, 00:09:01.174 "data_offset": 0, 00:09:01.174 "data_size": 65536 00:09:01.174 }, 00:09:01.174 { 00:09:01.174 "name": "BaseBdev3", 00:09:01.174 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:01.174 "is_configured": true, 00:09:01.174 "data_offset": 0, 00:09:01.174 "data_size": 65536 00:09:01.174 } 00:09:01.174 ] 00:09:01.174 }' 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.174 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.744 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.744 
14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.744 [2024-12-09 14:41:39.732137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.744 BaseBdev1 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.745 [ 00:09:01.745 { 00:09:01.745 "name": "BaseBdev1", 00:09:01.745 "aliases": [ 00:09:01.745 "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9" 00:09:01.745 ], 00:09:01.745 "product_name": 
"Malloc disk", 00:09:01.745 "block_size": 512, 00:09:01.745 "num_blocks": 65536, 00:09:01.745 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:01.745 "assigned_rate_limits": { 00:09:01.745 "rw_ios_per_sec": 0, 00:09:01.745 "rw_mbytes_per_sec": 0, 00:09:01.745 "r_mbytes_per_sec": 0, 00:09:01.745 "w_mbytes_per_sec": 0 00:09:01.745 }, 00:09:01.745 "claimed": true, 00:09:01.745 "claim_type": "exclusive_write", 00:09:01.745 "zoned": false, 00:09:01.745 "supported_io_types": { 00:09:01.745 "read": true, 00:09:01.745 "write": true, 00:09:01.745 "unmap": true, 00:09:01.745 "flush": true, 00:09:01.745 "reset": true, 00:09:01.745 "nvme_admin": false, 00:09:01.745 "nvme_io": false, 00:09:01.745 "nvme_io_md": false, 00:09:01.745 "write_zeroes": true, 00:09:01.745 "zcopy": true, 00:09:01.745 "get_zone_info": false, 00:09:01.745 "zone_management": false, 00:09:01.745 "zone_append": false, 00:09:01.745 "compare": false, 00:09:01.745 "compare_and_write": false, 00:09:01.745 "abort": true, 00:09:01.745 "seek_hole": false, 00:09:01.745 "seek_data": false, 00:09:01.745 "copy": true, 00:09:01.745 "nvme_iov_md": false 00:09:01.745 }, 00:09:01.745 "memory_domains": [ 00:09:01.745 { 00:09:01.745 "dma_device_id": "system", 00:09:01.745 "dma_device_type": 1 00:09:01.745 }, 00:09:01.745 { 00:09:01.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.745 "dma_device_type": 2 00:09:01.745 } 00:09:01.745 ], 00:09:01.745 "driver_specific": {} 00:09:01.745 } 00:09:01.745 ] 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.745 14:41:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.745 "name": "Existed_Raid", 00:09:01.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.745 "strip_size_kb": 64, 00:09:01.745 "state": "configuring", 00:09:01.745 "raid_level": "concat", 00:09:01.745 "superblock": false, 00:09:01.745 "num_base_bdevs": 3, 00:09:01.745 "num_base_bdevs_discovered": 2, 00:09:01.745 "num_base_bdevs_operational": 3, 00:09:01.745 "base_bdevs_list": [ 00:09:01.745 { 00:09:01.745 "name": "BaseBdev1", 
00:09:01.745 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:01.745 "is_configured": true, 00:09:01.745 "data_offset": 0, 00:09:01.745 "data_size": 65536 00:09:01.745 }, 00:09:01.745 { 00:09:01.745 "name": null, 00:09:01.745 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:01.745 "is_configured": false, 00:09:01.745 "data_offset": 0, 00:09:01.745 "data_size": 65536 00:09:01.745 }, 00:09:01.745 { 00:09:01.745 "name": "BaseBdev3", 00:09:01.745 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:01.745 "is_configured": true, 00:09:01.745 "data_offset": 0, 00:09:01.745 "data_size": 65536 00:09:01.745 } 00:09:01.745 ] 00:09:01.745 }' 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.745 14:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 [2024-12-09 14:41:40.219422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.314 
14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.314 "name": "Existed_Raid", 00:09:02.314 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:02.314 "strip_size_kb": 64, 00:09:02.314 "state": "configuring", 00:09:02.314 "raid_level": "concat", 00:09:02.314 "superblock": false, 00:09:02.314 "num_base_bdevs": 3, 00:09:02.314 "num_base_bdevs_discovered": 1, 00:09:02.314 "num_base_bdevs_operational": 3, 00:09:02.314 "base_bdevs_list": [ 00:09:02.314 { 00:09:02.314 "name": "BaseBdev1", 00:09:02.314 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:02.314 "is_configured": true, 00:09:02.314 "data_offset": 0, 00:09:02.314 "data_size": 65536 00:09:02.314 }, 00:09:02.314 { 00:09:02.314 "name": null, 00:09:02.314 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:02.314 "is_configured": false, 00:09:02.314 "data_offset": 0, 00:09:02.314 "data_size": 65536 00:09:02.314 }, 00:09:02.314 { 00:09:02.314 "name": null, 00:09:02.314 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:02.314 "is_configured": false, 00:09:02.314 "data_offset": 0, 00:09:02.314 "data_size": 65536 00:09:02.314 } 00:09:02.314 ] 00:09:02.314 }' 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.314 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.573 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.573 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:02.573 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.573 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.573 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.875 [2024-12-09 14:41:40.710693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.875 "name": "Existed_Raid", 00:09:02.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.875 "strip_size_kb": 64, 00:09:02.875 "state": "configuring", 00:09:02.875 "raid_level": "concat", 00:09:02.875 "superblock": false, 00:09:02.875 "num_base_bdevs": 3, 00:09:02.875 "num_base_bdevs_discovered": 2, 00:09:02.875 "num_base_bdevs_operational": 3, 00:09:02.875 "base_bdevs_list": [ 00:09:02.875 { 00:09:02.875 "name": "BaseBdev1", 00:09:02.875 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:02.875 "is_configured": true, 00:09:02.875 "data_offset": 0, 00:09:02.875 "data_size": 65536 00:09:02.875 }, 00:09:02.875 { 00:09:02.875 "name": null, 00:09:02.875 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:02.875 "is_configured": false, 00:09:02.875 "data_offset": 0, 00:09:02.875 "data_size": 65536 00:09:02.875 }, 00:09:02.875 { 00:09:02.875 "name": "BaseBdev3", 00:09:02.875 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:02.875 "is_configured": true, 00:09:02.875 "data_offset": 0, 00:09:02.875 "data_size": 65536 00:09:02.875 } 00:09:02.875 ] 00:09:02.875 }' 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.875 14:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.134 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.134 [2024-12-09 14:41:41.229815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.393 
14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.393 "name": "Existed_Raid", 00:09:03.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.393 "strip_size_kb": 64, 00:09:03.393 "state": "configuring", 00:09:03.393 "raid_level": "concat", 00:09:03.393 "superblock": false, 00:09:03.393 "num_base_bdevs": 3, 00:09:03.393 "num_base_bdevs_discovered": 1, 00:09:03.393 "num_base_bdevs_operational": 3, 00:09:03.393 "base_bdevs_list": [ 00:09:03.393 { 00:09:03.393 "name": null, 00:09:03.393 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:03.393 "is_configured": false, 00:09:03.393 "data_offset": 0, 00:09:03.393 "data_size": 65536 00:09:03.393 }, 00:09:03.393 { 00:09:03.393 "name": null, 00:09:03.393 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:03.393 "is_configured": false, 00:09:03.393 "data_offset": 0, 00:09:03.393 "data_size": 65536 00:09:03.393 }, 00:09:03.393 { 00:09:03.393 "name": "BaseBdev3", 00:09:03.393 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:03.393 "is_configured": true, 00:09:03.393 "data_offset": 0, 00:09:03.393 "data_size": 65536 00:09:03.393 } 00:09:03.393 ] 00:09:03.393 }' 00:09:03.393 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.393 14:41:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.960 [2024-12-09 14:41:41.850188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.960 14:41:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.960 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.960 "name": "Existed_Raid", 00:09:03.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.960 "strip_size_kb": 64, 00:09:03.960 "state": "configuring", 00:09:03.960 "raid_level": "concat", 00:09:03.960 "superblock": false, 00:09:03.960 "num_base_bdevs": 3, 00:09:03.960 "num_base_bdevs_discovered": 2, 00:09:03.960 "num_base_bdevs_operational": 3, 00:09:03.960 "base_bdevs_list": [ 00:09:03.960 { 00:09:03.960 "name": null, 00:09:03.960 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:03.960 "is_configured": false, 00:09:03.960 "data_offset": 0, 00:09:03.961 "data_size": 65536 00:09:03.961 }, 00:09:03.961 { 00:09:03.961 "name": "BaseBdev2", 00:09:03.961 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:03.961 "is_configured": true, 00:09:03.961 "data_offset": 
0, 00:09:03.961 "data_size": 65536 00:09:03.961 }, 00:09:03.961 { 00:09:03.961 "name": "BaseBdev3", 00:09:03.961 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:03.961 "is_configured": true, 00:09:03.961 "data_offset": 0, 00:09:03.961 "data_size": 65536 00:09:03.961 } 00:09:03.961 ] 00:09:03.961 }' 00:09:03.961 14:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.961 14:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.219 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.478 [2024-12-09 14:41:42.417943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:04.478 [2024-12-09 14:41:42.417989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.478 [2024-12-09 14:41:42.417999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:04.478 [2024-12-09 14:41:42.418244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:04.478 [2024-12-09 14:41:42.418383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.478 [2024-12-09 14:41:42.418392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:04.478 [2024-12-09 14:41:42.418695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.478 NewBaseBdev 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.478 
14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.478 [ 00:09:04.478 { 00:09:04.478 "name": "NewBaseBdev", 00:09:04.478 "aliases": [ 00:09:04.478 "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9" 00:09:04.478 ], 00:09:04.478 "product_name": "Malloc disk", 00:09:04.478 "block_size": 512, 00:09:04.478 "num_blocks": 65536, 00:09:04.478 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:04.478 "assigned_rate_limits": { 00:09:04.478 "rw_ios_per_sec": 0, 00:09:04.478 "rw_mbytes_per_sec": 0, 00:09:04.478 "r_mbytes_per_sec": 0, 00:09:04.478 "w_mbytes_per_sec": 0 00:09:04.478 }, 00:09:04.478 "claimed": true, 00:09:04.478 "claim_type": "exclusive_write", 00:09:04.478 "zoned": false, 00:09:04.478 "supported_io_types": { 00:09:04.478 "read": true, 00:09:04.478 "write": true, 00:09:04.478 "unmap": true, 00:09:04.478 "flush": true, 00:09:04.478 "reset": true, 00:09:04.478 "nvme_admin": false, 00:09:04.478 "nvme_io": false, 00:09:04.478 "nvme_io_md": false, 00:09:04.478 "write_zeroes": true, 00:09:04.478 "zcopy": true, 00:09:04.478 "get_zone_info": false, 00:09:04.478 "zone_management": false, 00:09:04.478 "zone_append": false, 00:09:04.478 "compare": false, 00:09:04.478 "compare_and_write": false, 00:09:04.478 "abort": true, 00:09:04.478 "seek_hole": false, 00:09:04.478 "seek_data": false, 00:09:04.478 "copy": true, 00:09:04.478 "nvme_iov_md": false 00:09:04.478 }, 00:09:04.478 
"memory_domains": [ 00:09:04.478 { 00:09:04.478 "dma_device_id": "system", 00:09:04.478 "dma_device_type": 1 00:09:04.478 }, 00:09:04.478 { 00:09:04.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.478 "dma_device_type": 2 00:09:04.478 } 00:09:04.478 ], 00:09:04.478 "driver_specific": {} 00:09:04.478 } 00:09:04.478 ] 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.478 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.478 "name": "Existed_Raid", 00:09:04.478 "uuid": "40811516-da6b-4304-a3a3-ed10355f140d", 00:09:04.478 "strip_size_kb": 64, 00:09:04.478 "state": "online", 00:09:04.478 "raid_level": "concat", 00:09:04.478 "superblock": false, 00:09:04.478 "num_base_bdevs": 3, 00:09:04.478 "num_base_bdevs_discovered": 3, 00:09:04.478 "num_base_bdevs_operational": 3, 00:09:04.478 "base_bdevs_list": [ 00:09:04.478 { 00:09:04.478 "name": "NewBaseBdev", 00:09:04.478 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:04.479 "is_configured": true, 00:09:04.479 "data_offset": 0, 00:09:04.479 "data_size": 65536 00:09:04.479 }, 00:09:04.479 { 00:09:04.479 "name": "BaseBdev2", 00:09:04.479 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:04.479 "is_configured": true, 00:09:04.479 "data_offset": 0, 00:09:04.479 "data_size": 65536 00:09:04.479 }, 00:09:04.479 { 00:09:04.479 "name": "BaseBdev3", 00:09:04.479 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:04.479 "is_configured": true, 00:09:04.479 "data_offset": 0, 00:09:04.479 "data_size": 65536 00:09:04.479 } 00:09:04.479 ] 00:09:04.479 }' 00:09:04.479 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.479 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.048 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.049 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.049 [2024-12-09 14:41:42.933435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.049 14:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.049 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.049 "name": "Existed_Raid", 00:09:05.049 "aliases": [ 00:09:05.049 "40811516-da6b-4304-a3a3-ed10355f140d" 00:09:05.049 ], 00:09:05.049 "product_name": "Raid Volume", 00:09:05.049 "block_size": 512, 00:09:05.049 "num_blocks": 196608, 00:09:05.049 "uuid": "40811516-da6b-4304-a3a3-ed10355f140d", 00:09:05.049 "assigned_rate_limits": { 00:09:05.049 "rw_ios_per_sec": 0, 00:09:05.049 "rw_mbytes_per_sec": 0, 00:09:05.049 "r_mbytes_per_sec": 0, 00:09:05.049 "w_mbytes_per_sec": 0 00:09:05.049 }, 00:09:05.049 "claimed": false, 00:09:05.049 "zoned": false, 00:09:05.049 "supported_io_types": { 00:09:05.049 "read": true, 00:09:05.049 "write": true, 00:09:05.049 "unmap": true, 00:09:05.049 "flush": true, 00:09:05.049 "reset": true, 00:09:05.049 "nvme_admin": false, 00:09:05.049 "nvme_io": false, 00:09:05.049 "nvme_io_md": false, 00:09:05.049 
"write_zeroes": true, 00:09:05.049 "zcopy": false, 00:09:05.049 "get_zone_info": false, 00:09:05.049 "zone_management": false, 00:09:05.049 "zone_append": false, 00:09:05.049 "compare": false, 00:09:05.049 "compare_and_write": false, 00:09:05.049 "abort": false, 00:09:05.049 "seek_hole": false, 00:09:05.049 "seek_data": false, 00:09:05.049 "copy": false, 00:09:05.049 "nvme_iov_md": false 00:09:05.049 }, 00:09:05.049 "memory_domains": [ 00:09:05.049 { 00:09:05.049 "dma_device_id": "system", 00:09:05.049 "dma_device_type": 1 00:09:05.049 }, 00:09:05.049 { 00:09:05.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.049 "dma_device_type": 2 00:09:05.049 }, 00:09:05.049 { 00:09:05.049 "dma_device_id": "system", 00:09:05.049 "dma_device_type": 1 00:09:05.049 }, 00:09:05.049 { 00:09:05.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.049 "dma_device_type": 2 00:09:05.049 }, 00:09:05.049 { 00:09:05.049 "dma_device_id": "system", 00:09:05.049 "dma_device_type": 1 00:09:05.049 }, 00:09:05.049 { 00:09:05.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.049 "dma_device_type": 2 00:09:05.049 } 00:09:05.049 ], 00:09:05.049 "driver_specific": { 00:09:05.049 "raid": { 00:09:05.049 "uuid": "40811516-da6b-4304-a3a3-ed10355f140d", 00:09:05.049 "strip_size_kb": 64, 00:09:05.049 "state": "online", 00:09:05.049 "raid_level": "concat", 00:09:05.049 "superblock": false, 00:09:05.049 "num_base_bdevs": 3, 00:09:05.049 "num_base_bdevs_discovered": 3, 00:09:05.049 "num_base_bdevs_operational": 3, 00:09:05.049 "base_bdevs_list": [ 00:09:05.049 { 00:09:05.049 "name": "NewBaseBdev", 00:09:05.049 "uuid": "e3e9f0b7-d15e-478f-9f5e-c87ca84bdcf9", 00:09:05.049 "is_configured": true, 00:09:05.049 "data_offset": 0, 00:09:05.049 "data_size": 65536 00:09:05.049 }, 00:09:05.049 { 00:09:05.049 "name": "BaseBdev2", 00:09:05.049 "uuid": "2d150767-1217-42b9-aaf3-c3bf1ae1e7bc", 00:09:05.049 "is_configured": true, 00:09:05.049 "data_offset": 0, 00:09:05.049 "data_size": 65536 00:09:05.049 }, 
00:09:05.049 { 00:09:05.049 "name": "BaseBdev3", 00:09:05.049 "uuid": "fb1e074f-f6e1-43ee-b7eb-a4e7f50b0003", 00:09:05.049 "is_configured": true, 00:09:05.049 "data_offset": 0, 00:09:05.049 "data_size": 65536 00:09:05.049 } 00:09:05.049 ] 00:09:05.049 } 00:09:05.049 } 00:09:05.049 }' 00:09:05.049 14:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:05.049 BaseBdev2 00:09:05.049 BaseBdev3' 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.049 14:41:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.049 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.309 
14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.309 [2024-12-09 14:41:43.224675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.309 [2024-12-09 14:41:43.224705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.309 [2024-12-09 14:41:43.224799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.309 [2024-12-09 14:41:43.224863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.309 [2024-12-09 14:41:43.224876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66886 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66886 ']' 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66886 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66886 00:09:05.309 killing process with pid 66886 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66886' 00:09:05.309 14:41:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 66886 00:09:05.309 [2024-12-09 14:41:43.277464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.309 14:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66886 00:09:05.568 [2024-12-09 14:41:43.596268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:06.949 00:09:06.949 real 0m10.693s 00:09:06.949 user 0m17.059s 00:09:06.949 sys 0m1.827s 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.949 ************************************ 00:09:06.949 END TEST raid_state_function_test 00:09:06.949 ************************************ 00:09:06.949 14:41:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:06.949 14:41:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:06.949 14:41:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.949 14:41:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.949 ************************************ 00:09:06.949 START TEST raid_state_function_test_sb 00:09:06.949 ************************************ 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:06.949 14:41:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:06.949 14:41:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:06.949 Process raid pid: 67513 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67513 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67513' 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67513 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67513 ']' 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.949 14:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.949 [2024-12-09 14:41:44.917617] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:09:06.949 [2024-12-09 14:41:44.917835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.209 [2024-12-09 14:41:45.075036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.209 [2024-12-09 14:41:45.189315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.467 [2024-12-09 14:41:45.394345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.467 [2024-12-09 14:41:45.394433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.726 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.726 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:07.726 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.727 [2024-12-09 14:41:45.748763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.727 [2024-12-09 14:41:45.748885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.727 [2024-12-09 
14:41:45.748919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.727 [2024-12-09 14:41:45.748944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.727 [2024-12-09 14:41:45.748963] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.727 [2024-12-09 14:41:45.748985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.727 "name": "Existed_Raid", 00:09:07.727 "uuid": "ba06dc7c-531e-4d8a-955b-e8feaac8a09c", 00:09:07.727 "strip_size_kb": 64, 00:09:07.727 "state": "configuring", 00:09:07.727 "raid_level": "concat", 00:09:07.727 "superblock": true, 00:09:07.727 "num_base_bdevs": 3, 00:09:07.727 "num_base_bdevs_discovered": 0, 00:09:07.727 "num_base_bdevs_operational": 3, 00:09:07.727 "base_bdevs_list": [ 00:09:07.727 { 00:09:07.727 "name": "BaseBdev1", 00:09:07.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.727 "is_configured": false, 00:09:07.727 "data_offset": 0, 00:09:07.727 "data_size": 0 00:09:07.727 }, 00:09:07.727 { 00:09:07.727 "name": "BaseBdev2", 00:09:07.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.727 "is_configured": false, 00:09:07.727 "data_offset": 0, 00:09:07.727 "data_size": 0 00:09:07.727 }, 00:09:07.727 { 00:09:07.727 "name": "BaseBdev3", 00:09:07.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.727 "is_configured": false, 00:09:07.727 "data_offset": 0, 00:09:07.727 "data_size": 0 00:09:07.727 } 00:09:07.727 ] 00:09:07.727 }' 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.727 14:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 [2024-12-09 14:41:46.211900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.296 [2024-12-09 14:41:46.211997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 [2024-12-09 14:41:46.223896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.296 [2024-12-09 14:41:46.223988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.296 [2024-12-09 14:41:46.224017] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.296 [2024-12-09 14:41:46.224050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.296 [2024-12-09 14:41:46.224068] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.296 [2024-12-09 14:41:46.224103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.296 
14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 [2024-12-09 14:41:46.272093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.296 BaseBdev1 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 [ 00:09:08.296 { 
00:09:08.296 "name": "BaseBdev1", 00:09:08.296 "aliases": [ 00:09:08.296 "88ecf97c-929d-44b1-9b1f-f1438c2bf405" 00:09:08.296 ], 00:09:08.296 "product_name": "Malloc disk", 00:09:08.296 "block_size": 512, 00:09:08.296 "num_blocks": 65536, 00:09:08.296 "uuid": "88ecf97c-929d-44b1-9b1f-f1438c2bf405", 00:09:08.296 "assigned_rate_limits": { 00:09:08.296 "rw_ios_per_sec": 0, 00:09:08.296 "rw_mbytes_per_sec": 0, 00:09:08.296 "r_mbytes_per_sec": 0, 00:09:08.296 "w_mbytes_per_sec": 0 00:09:08.296 }, 00:09:08.296 "claimed": true, 00:09:08.296 "claim_type": "exclusive_write", 00:09:08.296 "zoned": false, 00:09:08.296 "supported_io_types": { 00:09:08.296 "read": true, 00:09:08.296 "write": true, 00:09:08.296 "unmap": true, 00:09:08.296 "flush": true, 00:09:08.296 "reset": true, 00:09:08.296 "nvme_admin": false, 00:09:08.296 "nvme_io": false, 00:09:08.296 "nvme_io_md": false, 00:09:08.296 "write_zeroes": true, 00:09:08.296 "zcopy": true, 00:09:08.296 "get_zone_info": false, 00:09:08.296 "zone_management": false, 00:09:08.296 "zone_append": false, 00:09:08.296 "compare": false, 00:09:08.296 "compare_and_write": false, 00:09:08.296 "abort": true, 00:09:08.296 "seek_hole": false, 00:09:08.296 "seek_data": false, 00:09:08.296 "copy": true, 00:09:08.296 "nvme_iov_md": false 00:09:08.296 }, 00:09:08.296 "memory_domains": [ 00:09:08.296 { 00:09:08.296 "dma_device_id": "system", 00:09:08.296 "dma_device_type": 1 00:09:08.296 }, 00:09:08.296 { 00:09:08.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.296 "dma_device_type": 2 00:09:08.296 } 00:09:08.296 ], 00:09:08.296 "driver_specific": {} 00:09:08.296 } 00:09:08.296 ] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.296 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.296 "name": "Existed_Raid", 00:09:08.296 "uuid": "bb9c2c2c-e5ee-4b59-9649-af5081bf4477", 00:09:08.296 "strip_size_kb": 64, 00:09:08.296 "state": "configuring", 00:09:08.296 "raid_level": "concat", 00:09:08.296 "superblock": true, 00:09:08.296 
"num_base_bdevs": 3, 00:09:08.296 "num_base_bdevs_discovered": 1, 00:09:08.296 "num_base_bdevs_operational": 3, 00:09:08.296 "base_bdevs_list": [ 00:09:08.296 { 00:09:08.296 "name": "BaseBdev1", 00:09:08.296 "uuid": "88ecf97c-929d-44b1-9b1f-f1438c2bf405", 00:09:08.296 "is_configured": true, 00:09:08.296 "data_offset": 2048, 00:09:08.296 "data_size": 63488 00:09:08.297 }, 00:09:08.297 { 00:09:08.297 "name": "BaseBdev2", 00:09:08.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.297 "is_configured": false, 00:09:08.297 "data_offset": 0, 00:09:08.297 "data_size": 0 00:09:08.297 }, 00:09:08.297 { 00:09:08.297 "name": "BaseBdev3", 00:09:08.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.297 "is_configured": false, 00:09:08.297 "data_offset": 0, 00:09:08.297 "data_size": 0 00:09:08.297 } 00:09:08.297 ] 00:09:08.297 }' 00:09:08.297 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.297 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.866 [2024-12-09 14:41:46.691500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.866 [2024-12-09 14:41:46.691622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.866 
14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.866 [2024-12-09 14:41:46.703527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.866 [2024-12-09 14:41:46.705395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.866 [2024-12-09 14:41:46.705476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.866 [2024-12-09 14:41:46.705506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.866 [2024-12-09 14:41:46.705530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.866 "name": "Existed_Raid", 00:09:08.866 "uuid": "f1e8c8cb-9d17-4788-9bfc-2244945d5f11", 00:09:08.866 "strip_size_kb": 64, 00:09:08.866 "state": "configuring", 00:09:08.866 "raid_level": "concat", 00:09:08.866 "superblock": true, 00:09:08.866 "num_base_bdevs": 3, 00:09:08.866 "num_base_bdevs_discovered": 1, 00:09:08.866 "num_base_bdevs_operational": 3, 00:09:08.866 "base_bdevs_list": [ 00:09:08.866 { 00:09:08.866 "name": "BaseBdev1", 00:09:08.866 "uuid": "88ecf97c-929d-44b1-9b1f-f1438c2bf405", 00:09:08.866 "is_configured": true, 00:09:08.866 "data_offset": 2048, 00:09:08.866 "data_size": 63488 00:09:08.866 }, 00:09:08.866 { 00:09:08.866 "name": "BaseBdev2", 00:09:08.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.866 "is_configured": false, 00:09:08.866 "data_offset": 0, 00:09:08.866 "data_size": 0 00:09:08.866 }, 00:09:08.866 { 00:09:08.866 "name": "BaseBdev3", 00:09:08.866 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:08.866 "is_configured": false, 00:09:08.866 "data_offset": 0, 00:09:08.866 "data_size": 0 00:09:08.866 } 00:09:08.866 ] 00:09:08.866 }' 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.866 14:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.126 [2024-12-09 14:41:47.129023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.126 BaseBdev2 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.126 [ 00:09:09.126 { 00:09:09.126 "name": "BaseBdev2", 00:09:09.126 "aliases": [ 00:09:09.126 "d01fa3fb-8e05-45f1-a028-7eb829c3b722" 00:09:09.126 ], 00:09:09.126 "product_name": "Malloc disk", 00:09:09.126 "block_size": 512, 00:09:09.126 "num_blocks": 65536, 00:09:09.126 "uuid": "d01fa3fb-8e05-45f1-a028-7eb829c3b722", 00:09:09.126 "assigned_rate_limits": { 00:09:09.126 "rw_ios_per_sec": 0, 00:09:09.126 "rw_mbytes_per_sec": 0, 00:09:09.126 "r_mbytes_per_sec": 0, 00:09:09.126 "w_mbytes_per_sec": 0 00:09:09.126 }, 00:09:09.126 "claimed": true, 00:09:09.126 "claim_type": "exclusive_write", 00:09:09.126 "zoned": false, 00:09:09.126 "supported_io_types": { 00:09:09.126 "read": true, 00:09:09.126 "write": true, 00:09:09.126 "unmap": true, 00:09:09.126 "flush": true, 00:09:09.126 "reset": true, 00:09:09.126 "nvme_admin": false, 00:09:09.126 "nvme_io": false, 00:09:09.126 "nvme_io_md": false, 00:09:09.126 "write_zeroes": true, 00:09:09.126 "zcopy": true, 00:09:09.126 "get_zone_info": false, 00:09:09.126 "zone_management": false, 00:09:09.126 "zone_append": false, 00:09:09.126 "compare": false, 00:09:09.126 "compare_and_write": false, 00:09:09.126 "abort": true, 00:09:09.126 "seek_hole": false, 00:09:09.126 "seek_data": false, 00:09:09.126 "copy": true, 00:09:09.126 "nvme_iov_md": false 00:09:09.126 }, 00:09:09.126 "memory_domains": [ 00:09:09.126 { 00:09:09.126 "dma_device_id": "system", 00:09:09.126 "dma_device_type": 1 00:09:09.126 }, 00:09:09.126 { 00:09:09.126 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.126 "dma_device_type": 2 00:09:09.126 } 00:09:09.126 ], 00:09:09.126 "driver_specific": {} 00:09:09.126 } 00:09:09.126 ] 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.126 "name": "Existed_Raid", 00:09:09.126 "uuid": "f1e8c8cb-9d17-4788-9bfc-2244945d5f11", 00:09:09.126 "strip_size_kb": 64, 00:09:09.126 "state": "configuring", 00:09:09.126 "raid_level": "concat", 00:09:09.126 "superblock": true, 00:09:09.126 "num_base_bdevs": 3, 00:09:09.126 "num_base_bdevs_discovered": 2, 00:09:09.126 "num_base_bdevs_operational": 3, 00:09:09.126 "base_bdevs_list": [ 00:09:09.126 { 00:09:09.126 "name": "BaseBdev1", 00:09:09.126 "uuid": "88ecf97c-929d-44b1-9b1f-f1438c2bf405", 00:09:09.126 "is_configured": true, 00:09:09.126 "data_offset": 2048, 00:09:09.126 "data_size": 63488 00:09:09.126 }, 00:09:09.126 { 00:09:09.126 "name": "BaseBdev2", 00:09:09.126 "uuid": "d01fa3fb-8e05-45f1-a028-7eb829c3b722", 00:09:09.126 "is_configured": true, 00:09:09.126 "data_offset": 2048, 00:09:09.126 "data_size": 63488 00:09:09.126 }, 00:09:09.126 { 00:09:09.126 "name": "BaseBdev3", 00:09:09.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.126 "is_configured": false, 00:09:09.126 "data_offset": 0, 00:09:09.126 "data_size": 0 00:09:09.126 } 00:09:09.126 ] 00:09:09.126 }' 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.126 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:09.696 14:41:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.696 [2024-12-09 14:41:47.653720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.696 [2024-12-09 14:41:47.653983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.696 [2024-12-09 14:41:47.654003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:09.696 [2024-12-09 14:41:47.654388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:09.696 BaseBdev3 00:09:09.696 [2024-12-09 14:41:47.654556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.696 [2024-12-09 14:41:47.654567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:09.696 [2024-12-09 14:41:47.654729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.696 [ 00:09:09.696 { 00:09:09.696 "name": "BaseBdev3", 00:09:09.696 "aliases": [ 00:09:09.696 "c71fb68c-0224-4b79-a806-ea119dd1f0e8" 00:09:09.696 ], 00:09:09.696 "product_name": "Malloc disk", 00:09:09.696 "block_size": 512, 00:09:09.696 "num_blocks": 65536, 00:09:09.696 "uuid": "c71fb68c-0224-4b79-a806-ea119dd1f0e8", 00:09:09.696 "assigned_rate_limits": { 00:09:09.696 "rw_ios_per_sec": 0, 00:09:09.696 "rw_mbytes_per_sec": 0, 00:09:09.696 "r_mbytes_per_sec": 0, 00:09:09.696 "w_mbytes_per_sec": 0 00:09:09.696 }, 00:09:09.696 "claimed": true, 00:09:09.696 "claim_type": "exclusive_write", 00:09:09.696 "zoned": false, 00:09:09.696 "supported_io_types": { 00:09:09.696 "read": true, 00:09:09.696 "write": true, 00:09:09.696 "unmap": true, 00:09:09.696 "flush": true, 00:09:09.696 "reset": true, 00:09:09.696 "nvme_admin": false, 00:09:09.696 "nvme_io": false, 00:09:09.696 "nvme_io_md": false, 00:09:09.696 "write_zeroes": true, 00:09:09.696 "zcopy": true, 00:09:09.696 "get_zone_info": false, 00:09:09.696 "zone_management": false, 00:09:09.696 "zone_append": false, 00:09:09.696 "compare": false, 00:09:09.696 "compare_and_write": false, 00:09:09.696 "abort": true, 00:09:09.696 "seek_hole": false, 00:09:09.696 "seek_data": false, 
00:09:09.696 "copy": true, 00:09:09.696 "nvme_iov_md": false 00:09:09.696 }, 00:09:09.696 "memory_domains": [ 00:09:09.696 { 00:09:09.696 "dma_device_id": "system", 00:09:09.696 "dma_device_type": 1 00:09:09.696 }, 00:09:09.696 { 00:09:09.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.696 "dma_device_type": 2 00:09:09.696 } 00:09:09.696 ], 00:09:09.696 "driver_specific": {} 00:09:09.696 } 00:09:09.696 ] 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.696 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.696 "name": "Existed_Raid", 00:09:09.696 "uuid": "f1e8c8cb-9d17-4788-9bfc-2244945d5f11", 00:09:09.696 "strip_size_kb": 64, 00:09:09.696 "state": "online", 00:09:09.696 "raid_level": "concat", 00:09:09.696 "superblock": true, 00:09:09.696 "num_base_bdevs": 3, 00:09:09.696 "num_base_bdevs_discovered": 3, 00:09:09.696 "num_base_bdevs_operational": 3, 00:09:09.697 "base_bdevs_list": [ 00:09:09.697 { 00:09:09.697 "name": "BaseBdev1", 00:09:09.697 "uuid": "88ecf97c-929d-44b1-9b1f-f1438c2bf405", 00:09:09.697 "is_configured": true, 00:09:09.697 "data_offset": 2048, 00:09:09.697 "data_size": 63488 00:09:09.697 }, 00:09:09.697 { 00:09:09.697 "name": "BaseBdev2", 00:09:09.697 "uuid": "d01fa3fb-8e05-45f1-a028-7eb829c3b722", 00:09:09.697 "is_configured": true, 00:09:09.697 "data_offset": 2048, 00:09:09.697 "data_size": 63488 00:09:09.697 }, 00:09:09.697 { 00:09:09.697 "name": "BaseBdev3", 00:09:09.697 "uuid": "c71fb68c-0224-4b79-a806-ea119dd1f0e8", 00:09:09.697 "is_configured": true, 00:09:09.697 "data_offset": 2048, 00:09:09.697 "data_size": 63488 00:09:09.697 } 00:09:09.697 ] 00:09:09.697 }' 00:09:09.697 14:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.697 14:41:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.266 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.266 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.266 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.266 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.266 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.266 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.266 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.267 [2024-12-09 14:41:48.181207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.267 "name": "Existed_Raid", 00:09:10.267 "aliases": [ 00:09:10.267 "f1e8c8cb-9d17-4788-9bfc-2244945d5f11" 00:09:10.267 ], 00:09:10.267 "product_name": "Raid Volume", 00:09:10.267 "block_size": 512, 00:09:10.267 "num_blocks": 190464, 00:09:10.267 "uuid": "f1e8c8cb-9d17-4788-9bfc-2244945d5f11", 00:09:10.267 "assigned_rate_limits": { 00:09:10.267 "rw_ios_per_sec": 0, 00:09:10.267 "rw_mbytes_per_sec": 0, 00:09:10.267 
"r_mbytes_per_sec": 0, 00:09:10.267 "w_mbytes_per_sec": 0 00:09:10.267 }, 00:09:10.267 "claimed": false, 00:09:10.267 "zoned": false, 00:09:10.267 "supported_io_types": { 00:09:10.267 "read": true, 00:09:10.267 "write": true, 00:09:10.267 "unmap": true, 00:09:10.267 "flush": true, 00:09:10.267 "reset": true, 00:09:10.267 "nvme_admin": false, 00:09:10.267 "nvme_io": false, 00:09:10.267 "nvme_io_md": false, 00:09:10.267 "write_zeroes": true, 00:09:10.267 "zcopy": false, 00:09:10.267 "get_zone_info": false, 00:09:10.267 "zone_management": false, 00:09:10.267 "zone_append": false, 00:09:10.267 "compare": false, 00:09:10.267 "compare_and_write": false, 00:09:10.267 "abort": false, 00:09:10.267 "seek_hole": false, 00:09:10.267 "seek_data": false, 00:09:10.267 "copy": false, 00:09:10.267 "nvme_iov_md": false 00:09:10.267 }, 00:09:10.267 "memory_domains": [ 00:09:10.267 { 00:09:10.267 "dma_device_id": "system", 00:09:10.267 "dma_device_type": 1 00:09:10.267 }, 00:09:10.267 { 00:09:10.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.267 "dma_device_type": 2 00:09:10.267 }, 00:09:10.267 { 00:09:10.267 "dma_device_id": "system", 00:09:10.267 "dma_device_type": 1 00:09:10.267 }, 00:09:10.267 { 00:09:10.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.267 "dma_device_type": 2 00:09:10.267 }, 00:09:10.267 { 00:09:10.267 "dma_device_id": "system", 00:09:10.267 "dma_device_type": 1 00:09:10.267 }, 00:09:10.267 { 00:09:10.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.267 "dma_device_type": 2 00:09:10.267 } 00:09:10.267 ], 00:09:10.267 "driver_specific": { 00:09:10.267 "raid": { 00:09:10.267 "uuid": "f1e8c8cb-9d17-4788-9bfc-2244945d5f11", 00:09:10.267 "strip_size_kb": 64, 00:09:10.267 "state": "online", 00:09:10.267 "raid_level": "concat", 00:09:10.267 "superblock": true, 00:09:10.267 "num_base_bdevs": 3, 00:09:10.267 "num_base_bdevs_discovered": 3, 00:09:10.267 "num_base_bdevs_operational": 3, 00:09:10.267 "base_bdevs_list": [ 00:09:10.267 { 00:09:10.267 
"name": "BaseBdev1", 00:09:10.267 "uuid": "88ecf97c-929d-44b1-9b1f-f1438c2bf405", 00:09:10.267 "is_configured": true, 00:09:10.267 "data_offset": 2048, 00:09:10.267 "data_size": 63488 00:09:10.267 }, 00:09:10.267 { 00:09:10.267 "name": "BaseBdev2", 00:09:10.267 "uuid": "d01fa3fb-8e05-45f1-a028-7eb829c3b722", 00:09:10.267 "is_configured": true, 00:09:10.267 "data_offset": 2048, 00:09:10.267 "data_size": 63488 00:09:10.267 }, 00:09:10.267 { 00:09:10.267 "name": "BaseBdev3", 00:09:10.267 "uuid": "c71fb68c-0224-4b79-a806-ea119dd1f0e8", 00:09:10.267 "is_configured": true, 00:09:10.267 "data_offset": 2048, 00:09:10.267 "data_size": 63488 00:09:10.267 } 00:09:10.267 ] 00:09:10.267 } 00:09:10.267 } 00:09:10.267 }' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:10.267 BaseBdev2 00:09:10.267 BaseBdev3' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.267 14:41:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.267 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.527 [2024-12-09 14:41:48.440526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.527 [2024-12-09 14:41:48.440557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.527 [2024-12-09 14:41:48.440632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:10.527 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.528 "name": "Existed_Raid", 00:09:10.528 "uuid": "f1e8c8cb-9d17-4788-9bfc-2244945d5f11", 00:09:10.528 "strip_size_kb": 64, 00:09:10.528 "state": "offline", 00:09:10.528 "raid_level": "concat", 00:09:10.528 "superblock": true, 00:09:10.528 "num_base_bdevs": 3, 00:09:10.528 "num_base_bdevs_discovered": 2, 00:09:10.528 "num_base_bdevs_operational": 2, 00:09:10.528 "base_bdevs_list": [ 00:09:10.528 { 00:09:10.528 "name": null, 00:09:10.528 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:10.528 "is_configured": false, 00:09:10.528 "data_offset": 0, 00:09:10.528 "data_size": 63488 00:09:10.528 }, 00:09:10.528 { 00:09:10.528 "name": "BaseBdev2", 00:09:10.528 "uuid": "d01fa3fb-8e05-45f1-a028-7eb829c3b722", 00:09:10.528 "is_configured": true, 00:09:10.528 "data_offset": 2048, 00:09:10.528 "data_size": 63488 00:09:10.528 }, 00:09:10.528 { 00:09:10.528 "name": "BaseBdev3", 00:09:10.528 "uuid": "c71fb68c-0224-4b79-a806-ea119dd1f0e8", 00:09:10.528 "is_configured": true, 00:09:10.528 "data_offset": 2048, 00:09:10.528 "data_size": 63488 00:09:10.528 } 00:09:10.528 ] 00:09:10.528 }' 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.528 14:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.096 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.096 14:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.096 [2024-12-09 14:41:49.060255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.096 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.096 [2024-12-09 14:41:49.210318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.096 [2024-12-09 14:41:49.210372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.356 BaseBdev2 00:09:11.356 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.356 
14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 [ 00:09:11.357 { 00:09:11.357 "name": "BaseBdev2", 00:09:11.357 "aliases": [ 00:09:11.357 "964565a4-e459-46fe-ba37-19dbe94d8854" 00:09:11.357 ], 00:09:11.357 "product_name": "Malloc disk", 00:09:11.357 "block_size": 512, 00:09:11.357 "num_blocks": 65536, 00:09:11.357 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:11.357 "assigned_rate_limits": { 00:09:11.357 "rw_ios_per_sec": 0, 00:09:11.357 "rw_mbytes_per_sec": 0, 00:09:11.357 "r_mbytes_per_sec": 0, 00:09:11.357 "w_mbytes_per_sec": 0 
00:09:11.357 }, 00:09:11.357 "claimed": false, 00:09:11.357 "zoned": false, 00:09:11.357 "supported_io_types": { 00:09:11.357 "read": true, 00:09:11.357 "write": true, 00:09:11.357 "unmap": true, 00:09:11.357 "flush": true, 00:09:11.357 "reset": true, 00:09:11.357 "nvme_admin": false, 00:09:11.357 "nvme_io": false, 00:09:11.357 "nvme_io_md": false, 00:09:11.357 "write_zeroes": true, 00:09:11.357 "zcopy": true, 00:09:11.357 "get_zone_info": false, 00:09:11.357 "zone_management": false, 00:09:11.357 "zone_append": false, 00:09:11.357 "compare": false, 00:09:11.357 "compare_and_write": false, 00:09:11.357 "abort": true, 00:09:11.357 "seek_hole": false, 00:09:11.357 "seek_data": false, 00:09:11.357 "copy": true, 00:09:11.357 "nvme_iov_md": false 00:09:11.357 }, 00:09:11.357 "memory_domains": [ 00:09:11.357 { 00:09:11.357 "dma_device_id": "system", 00:09:11.357 "dma_device_type": 1 00:09:11.357 }, 00:09:11.357 { 00:09:11.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.357 "dma_device_type": 2 00:09:11.357 } 00:09:11.357 ], 00:09:11.357 "driver_specific": {} 00:09:11.357 } 00:09:11.357 ] 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 BaseBdev3 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.357 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.617 [ 00:09:11.617 { 00:09:11.617 "name": "BaseBdev3", 00:09:11.617 "aliases": [ 00:09:11.617 "ae7fa802-c7b6-425d-ab83-1a6f6c86718a" 00:09:11.617 ], 00:09:11.617 "product_name": "Malloc disk", 00:09:11.617 "block_size": 512, 00:09:11.617 "num_blocks": 65536, 00:09:11.617 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:11.617 "assigned_rate_limits": { 00:09:11.617 "rw_ios_per_sec": 0, 00:09:11.617 "rw_mbytes_per_sec": 0, 
00:09:11.617 "r_mbytes_per_sec": 0, 00:09:11.617 "w_mbytes_per_sec": 0 00:09:11.617 }, 00:09:11.617 "claimed": false, 00:09:11.617 "zoned": false, 00:09:11.617 "supported_io_types": { 00:09:11.617 "read": true, 00:09:11.617 "write": true, 00:09:11.617 "unmap": true, 00:09:11.617 "flush": true, 00:09:11.617 "reset": true, 00:09:11.617 "nvme_admin": false, 00:09:11.617 "nvme_io": false, 00:09:11.617 "nvme_io_md": false, 00:09:11.617 "write_zeroes": true, 00:09:11.617 "zcopy": true, 00:09:11.617 "get_zone_info": false, 00:09:11.617 "zone_management": false, 00:09:11.617 "zone_append": false, 00:09:11.617 "compare": false, 00:09:11.617 "compare_and_write": false, 00:09:11.617 "abort": true, 00:09:11.617 "seek_hole": false, 00:09:11.617 "seek_data": false, 00:09:11.617 "copy": true, 00:09:11.617 "nvme_iov_md": false 00:09:11.617 }, 00:09:11.617 "memory_domains": [ 00:09:11.617 { 00:09:11.617 "dma_device_id": "system", 00:09:11.617 "dma_device_type": 1 00:09:11.617 }, 00:09:11.617 { 00:09:11.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.617 "dma_device_type": 2 00:09:11.617 } 00:09:11.617 ], 00:09:11.617 "driver_specific": {} 00:09:11.617 } 00:09:11.617 ] 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.617 [2024-12-09 14:41:49.505788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.617 [2024-12-09 14:41:49.505901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.617 [2024-12-09 14:41:49.505960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.617 [2024-12-09 14:41:49.507790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.617 14:41:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.617 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.618 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.618 "name": "Existed_Raid", 00:09:11.618 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:11.618 "strip_size_kb": 64, 00:09:11.618 "state": "configuring", 00:09:11.618 "raid_level": "concat", 00:09:11.618 "superblock": true, 00:09:11.618 "num_base_bdevs": 3, 00:09:11.618 "num_base_bdevs_discovered": 2, 00:09:11.618 "num_base_bdevs_operational": 3, 00:09:11.618 "base_bdevs_list": [ 00:09:11.618 { 00:09:11.618 "name": "BaseBdev1", 00:09:11.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.618 "is_configured": false, 00:09:11.618 "data_offset": 0, 00:09:11.618 "data_size": 0 00:09:11.618 }, 00:09:11.618 { 00:09:11.618 "name": "BaseBdev2", 00:09:11.618 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:11.618 "is_configured": true, 00:09:11.618 "data_offset": 2048, 00:09:11.618 "data_size": 63488 00:09:11.618 }, 00:09:11.618 { 00:09:11.618 "name": "BaseBdev3", 00:09:11.618 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:11.618 "is_configured": true, 00:09:11.618 "data_offset": 2048, 00:09:11.618 "data_size": 63488 00:09:11.618 } 00:09:11.618 ] 00:09:11.618 }' 00:09:11.618 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.618 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.877 [2024-12-09 14:41:49.957082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.877 14:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.137 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.137 "name": "Existed_Raid", 00:09:12.137 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:12.137 "strip_size_kb": 64, 00:09:12.137 "state": "configuring", 00:09:12.137 "raid_level": "concat", 00:09:12.137 "superblock": true, 00:09:12.137 "num_base_bdevs": 3, 00:09:12.137 "num_base_bdevs_discovered": 1, 00:09:12.137 "num_base_bdevs_operational": 3, 00:09:12.137 "base_bdevs_list": [ 00:09:12.137 { 00:09:12.137 "name": "BaseBdev1", 00:09:12.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.137 "is_configured": false, 00:09:12.137 "data_offset": 0, 00:09:12.137 "data_size": 0 00:09:12.137 }, 00:09:12.137 { 00:09:12.137 "name": null, 00:09:12.137 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:12.137 "is_configured": false, 00:09:12.137 "data_offset": 0, 00:09:12.137 "data_size": 63488 00:09:12.137 }, 00:09:12.137 { 00:09:12.137 "name": "BaseBdev3", 00:09:12.137 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:12.137 "is_configured": true, 00:09:12.137 "data_offset": 2048, 00:09:12.137 "data_size": 63488 00:09:12.137 } 00:09:12.137 ] 00:09:12.137 }' 00:09:12.137 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.137 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.398 [2024-12-09 14:41:50.452747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.398 BaseBdev1 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.398 14:41:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.398 [ 00:09:12.398 { 00:09:12.398 "name": "BaseBdev1", 00:09:12.398 "aliases": [ 00:09:12.398 "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be" 00:09:12.398 ], 00:09:12.398 "product_name": "Malloc disk", 00:09:12.398 "block_size": 512, 00:09:12.398 "num_blocks": 65536, 00:09:12.398 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:12.398 "assigned_rate_limits": { 00:09:12.398 "rw_ios_per_sec": 0, 00:09:12.398 "rw_mbytes_per_sec": 0, 00:09:12.398 "r_mbytes_per_sec": 0, 00:09:12.398 "w_mbytes_per_sec": 0 00:09:12.398 }, 00:09:12.398 "claimed": true, 00:09:12.398 "claim_type": "exclusive_write", 00:09:12.398 "zoned": false, 00:09:12.398 "supported_io_types": { 00:09:12.398 "read": true, 00:09:12.398 "write": true, 00:09:12.398 "unmap": true, 00:09:12.398 "flush": true, 00:09:12.398 "reset": true, 00:09:12.398 "nvme_admin": false, 00:09:12.398 "nvme_io": false, 00:09:12.398 "nvme_io_md": false, 00:09:12.398 "write_zeroes": true, 00:09:12.398 "zcopy": true, 00:09:12.398 "get_zone_info": false, 00:09:12.398 "zone_management": false, 00:09:12.398 "zone_append": false, 00:09:12.398 "compare": false, 00:09:12.398 "compare_and_write": false, 00:09:12.398 "abort": true, 00:09:12.398 "seek_hole": false, 00:09:12.398 "seek_data": false, 00:09:12.398 "copy": true, 00:09:12.398 "nvme_iov_md": false 00:09:12.398 }, 00:09:12.398 "memory_domains": [ 00:09:12.398 { 00:09:12.398 "dma_device_id": "system", 00:09:12.398 "dma_device_type": 1 00:09:12.398 }, 00:09:12.398 { 00:09:12.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.398 
"dma_device_type": 2 00:09:12.398 } 00:09:12.398 ], 00:09:12.398 "driver_specific": {} 00:09:12.398 } 00:09:12.398 ] 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:12.398 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.658 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.658 "name": "Existed_Raid", 00:09:12.658 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:12.658 "strip_size_kb": 64, 00:09:12.658 "state": "configuring", 00:09:12.658 "raid_level": "concat", 00:09:12.658 "superblock": true, 00:09:12.658 "num_base_bdevs": 3, 00:09:12.658 "num_base_bdevs_discovered": 2, 00:09:12.658 "num_base_bdevs_operational": 3, 00:09:12.658 "base_bdevs_list": [ 00:09:12.658 { 00:09:12.658 "name": "BaseBdev1", 00:09:12.658 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:12.658 "is_configured": true, 00:09:12.658 "data_offset": 2048, 00:09:12.658 "data_size": 63488 00:09:12.658 }, 00:09:12.658 { 00:09:12.658 "name": null, 00:09:12.658 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:12.658 "is_configured": false, 00:09:12.658 "data_offset": 0, 00:09:12.658 "data_size": 63488 00:09:12.658 }, 00:09:12.658 { 00:09:12.658 "name": "BaseBdev3", 00:09:12.658 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:12.658 "is_configured": true, 00:09:12.658 "data_offset": 2048, 00:09:12.658 "data_size": 63488 00:09:12.658 } 00:09:12.658 ] 00:09:12.658 }' 00:09:12.658 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.658 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.918 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.918 14:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.918 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:12.918 14:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.918 [2024-12-09 14:41:51.011882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.918 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.919 
14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.919 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.919 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.919 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.919 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.177 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.177 "name": "Existed_Raid", 00:09:13.177 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:13.177 "strip_size_kb": 64, 00:09:13.177 "state": "configuring", 00:09:13.177 "raid_level": "concat", 00:09:13.177 "superblock": true, 00:09:13.177 "num_base_bdevs": 3, 00:09:13.177 "num_base_bdevs_discovered": 1, 00:09:13.177 "num_base_bdevs_operational": 3, 00:09:13.177 "base_bdevs_list": [ 00:09:13.177 { 00:09:13.177 "name": "BaseBdev1", 00:09:13.177 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:13.177 "is_configured": true, 00:09:13.177 "data_offset": 2048, 00:09:13.177 "data_size": 63488 00:09:13.177 }, 00:09:13.177 { 00:09:13.177 "name": null, 00:09:13.177 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:13.177 "is_configured": false, 00:09:13.177 "data_offset": 0, 00:09:13.177 "data_size": 63488 00:09:13.177 }, 00:09:13.177 { 00:09:13.177 "name": null, 00:09:13.177 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:13.177 "is_configured": false, 00:09:13.177 "data_offset": 0, 00:09:13.177 "data_size": 63488 00:09:13.177 } 00:09:13.177 ] 00:09:13.177 }' 00:09:13.177 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.177 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.436 
14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.436 [2024-12-09 14:41:51.427232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.436 "name": "Existed_Raid", 00:09:13.436 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:13.436 "strip_size_kb": 64, 00:09:13.436 "state": "configuring", 00:09:13.436 "raid_level": "concat", 00:09:13.436 "superblock": true, 00:09:13.436 "num_base_bdevs": 3, 00:09:13.436 "num_base_bdevs_discovered": 2, 00:09:13.436 "num_base_bdevs_operational": 3, 00:09:13.436 "base_bdevs_list": [ 00:09:13.436 { 00:09:13.436 "name": "BaseBdev1", 00:09:13.436 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:13.436 "is_configured": true, 00:09:13.436 "data_offset": 2048, 00:09:13.436 "data_size": 63488 00:09:13.436 }, 00:09:13.436 { 00:09:13.436 "name": null, 00:09:13.436 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:13.436 "is_configured": false, 00:09:13.436 "data_offset": 0, 00:09:13.436 "data_size": 
63488 00:09:13.436 }, 00:09:13.436 { 00:09:13.436 "name": "BaseBdev3", 00:09:13.436 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:13.436 "is_configured": true, 00:09:13.436 "data_offset": 2048, 00:09:13.436 "data_size": 63488 00:09:13.436 } 00:09:13.436 ] 00:09:13.436 }' 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.436 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.016 14:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 [2024-12-09 14:41:51.958379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.016 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.016 "name": "Existed_Raid", 00:09:14.016 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:14.016 "strip_size_kb": 64, 00:09:14.016 "state": "configuring", 00:09:14.016 "raid_level": "concat", 00:09:14.016 "superblock": true, 00:09:14.016 "num_base_bdevs": 3, 00:09:14.017 "num_base_bdevs_discovered": 1, 00:09:14.017 "num_base_bdevs_operational": 
3, 00:09:14.017 "base_bdevs_list": [ 00:09:14.017 { 00:09:14.017 "name": null, 00:09:14.017 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:14.017 "is_configured": false, 00:09:14.017 "data_offset": 0, 00:09:14.017 "data_size": 63488 00:09:14.017 }, 00:09:14.017 { 00:09:14.017 "name": null, 00:09:14.017 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:14.017 "is_configured": false, 00:09:14.017 "data_offset": 0, 00:09:14.017 "data_size": 63488 00:09:14.017 }, 00:09:14.017 { 00:09:14.017 "name": "BaseBdev3", 00:09:14.017 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:14.017 "is_configured": true, 00:09:14.017 "data_offset": 2048, 00:09:14.017 "data_size": 63488 00:09:14.017 } 00:09:14.017 ] 00:09:14.017 }' 00:09:14.017 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.017 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:14.584 [2024-12-09 14:41:52.572468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.584 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.585 "name": "Existed_Raid", 00:09:14.585 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:14.585 "strip_size_kb": 64, 00:09:14.585 "state": "configuring", 00:09:14.585 "raid_level": "concat", 00:09:14.585 "superblock": true, 00:09:14.585 "num_base_bdevs": 3, 00:09:14.585 "num_base_bdevs_discovered": 2, 00:09:14.585 "num_base_bdevs_operational": 3, 00:09:14.585 "base_bdevs_list": [ 00:09:14.585 { 00:09:14.585 "name": null, 00:09:14.585 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:14.585 "is_configured": false, 00:09:14.585 "data_offset": 0, 00:09:14.585 "data_size": 63488 00:09:14.585 }, 00:09:14.585 { 00:09:14.585 "name": "BaseBdev2", 00:09:14.585 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:14.585 "is_configured": true, 00:09:14.585 "data_offset": 2048, 00:09:14.585 "data_size": 63488 00:09:14.585 }, 00:09:14.585 { 00:09:14.585 "name": "BaseBdev3", 00:09:14.585 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:14.585 "is_configured": true, 00:09:14.585 "data_offset": 2048, 00:09:14.585 "data_size": 63488 00:09:14.585 } 00:09:14.585 ] 00:09:14.585 }' 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.585 14:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.152 [2024-12-09 14:41:53.177502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.152 [2024-12-09 14:41:53.177768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.152 [2024-12-09 14:41:53.177786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.152 [2024-12-09 14:41:53.178035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:15.152 [2024-12-09 14:41:53.178208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.152 [2024-12-09 14:41:53.178218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:15.152 [2024-12-09 14:41:53.178345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:15.152 NewBaseBdev 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.152 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.152 [ 00:09:15.152 { 00:09:15.152 "name": "NewBaseBdev", 00:09:15.152 "aliases": [ 00:09:15.152 "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be" 00:09:15.152 ], 00:09:15.152 "product_name": "Malloc disk", 00:09:15.152 "block_size": 512, 00:09:15.152 "num_blocks": 65536, 00:09:15.152 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 
00:09:15.153 "assigned_rate_limits": { 00:09:15.153 "rw_ios_per_sec": 0, 00:09:15.153 "rw_mbytes_per_sec": 0, 00:09:15.153 "r_mbytes_per_sec": 0, 00:09:15.153 "w_mbytes_per_sec": 0 00:09:15.153 }, 00:09:15.153 "claimed": true, 00:09:15.153 "claim_type": "exclusive_write", 00:09:15.153 "zoned": false, 00:09:15.153 "supported_io_types": { 00:09:15.153 "read": true, 00:09:15.153 "write": true, 00:09:15.153 "unmap": true, 00:09:15.153 "flush": true, 00:09:15.153 "reset": true, 00:09:15.153 "nvme_admin": false, 00:09:15.153 "nvme_io": false, 00:09:15.153 "nvme_io_md": false, 00:09:15.153 "write_zeroes": true, 00:09:15.153 "zcopy": true, 00:09:15.153 "get_zone_info": false, 00:09:15.153 "zone_management": false, 00:09:15.153 "zone_append": false, 00:09:15.153 "compare": false, 00:09:15.153 "compare_and_write": false, 00:09:15.153 "abort": true, 00:09:15.153 "seek_hole": false, 00:09:15.153 "seek_data": false, 00:09:15.153 "copy": true, 00:09:15.153 "nvme_iov_md": false 00:09:15.153 }, 00:09:15.153 "memory_domains": [ 00:09:15.153 { 00:09:15.153 "dma_device_id": "system", 00:09:15.153 "dma_device_type": 1 00:09:15.153 }, 00:09:15.153 { 00:09:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.153 "dma_device_type": 2 00:09:15.153 } 00:09:15.153 ], 00:09:15.153 "driver_specific": {} 00:09:15.153 } 00:09:15.153 ] 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.153 "name": "Existed_Raid", 00:09:15.153 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:15.153 "strip_size_kb": 64, 00:09:15.153 "state": "online", 00:09:15.153 "raid_level": "concat", 00:09:15.153 "superblock": true, 00:09:15.153 "num_base_bdevs": 3, 00:09:15.153 "num_base_bdevs_discovered": 3, 00:09:15.153 "num_base_bdevs_operational": 3, 00:09:15.153 "base_bdevs_list": [ 00:09:15.153 { 00:09:15.153 "name": "NewBaseBdev", 00:09:15.153 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:15.153 "is_configured": true, 00:09:15.153 "data_offset": 2048, 
00:09:15.153 "data_size": 63488 00:09:15.153 }, 00:09:15.153 { 00:09:15.153 "name": "BaseBdev2", 00:09:15.153 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:15.153 "is_configured": true, 00:09:15.153 "data_offset": 2048, 00:09:15.153 "data_size": 63488 00:09:15.153 }, 00:09:15.153 { 00:09:15.153 "name": "BaseBdev3", 00:09:15.153 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:15.153 "is_configured": true, 00:09:15.153 "data_offset": 2048, 00:09:15.153 "data_size": 63488 00:09:15.153 } 00:09:15.153 ] 00:09:15.153 }' 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.153 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.722 [2024-12-09 14:41:53.681005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.722 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.722 "name": "Existed_Raid", 00:09:15.722 "aliases": [ 00:09:15.722 "c9438543-d427-41a4-a5b3-1100fdbaa0db" 00:09:15.722 ], 00:09:15.722 "product_name": "Raid Volume", 00:09:15.722 "block_size": 512, 00:09:15.722 "num_blocks": 190464, 00:09:15.722 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:15.722 "assigned_rate_limits": { 00:09:15.722 "rw_ios_per_sec": 0, 00:09:15.722 "rw_mbytes_per_sec": 0, 00:09:15.722 "r_mbytes_per_sec": 0, 00:09:15.722 "w_mbytes_per_sec": 0 00:09:15.722 }, 00:09:15.722 "claimed": false, 00:09:15.722 "zoned": false, 00:09:15.722 "supported_io_types": { 00:09:15.722 "read": true, 00:09:15.722 "write": true, 00:09:15.722 "unmap": true, 00:09:15.722 "flush": true, 00:09:15.722 "reset": true, 00:09:15.722 "nvme_admin": false, 00:09:15.722 "nvme_io": false, 00:09:15.722 "nvme_io_md": false, 00:09:15.722 "write_zeroes": true, 00:09:15.722 "zcopy": false, 00:09:15.722 "get_zone_info": false, 00:09:15.722 "zone_management": false, 00:09:15.722 "zone_append": false, 00:09:15.722 "compare": false, 00:09:15.722 "compare_and_write": false, 00:09:15.722 "abort": false, 00:09:15.722 "seek_hole": false, 00:09:15.722 "seek_data": false, 00:09:15.722 "copy": false, 00:09:15.722 "nvme_iov_md": false 00:09:15.722 }, 00:09:15.722 "memory_domains": [ 00:09:15.722 { 00:09:15.722 "dma_device_id": "system", 00:09:15.722 "dma_device_type": 1 00:09:15.722 }, 00:09:15.722 { 00:09:15.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.722 "dma_device_type": 2 00:09:15.722 }, 00:09:15.722 { 00:09:15.722 "dma_device_id": "system", 00:09:15.722 "dma_device_type": 1 00:09:15.722 }, 00:09:15.722 { 00:09:15.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.722 "dma_device_type": 2 00:09:15.722 }, 00:09:15.722 { 
00:09:15.722 "dma_device_id": "system", 00:09:15.722 "dma_device_type": 1 00:09:15.722 }, 00:09:15.722 { 00:09:15.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.722 "dma_device_type": 2 00:09:15.722 } 00:09:15.722 ], 00:09:15.722 "driver_specific": { 00:09:15.722 "raid": { 00:09:15.722 "uuid": "c9438543-d427-41a4-a5b3-1100fdbaa0db", 00:09:15.722 "strip_size_kb": 64, 00:09:15.722 "state": "online", 00:09:15.722 "raid_level": "concat", 00:09:15.722 "superblock": true, 00:09:15.722 "num_base_bdevs": 3, 00:09:15.722 "num_base_bdevs_discovered": 3, 00:09:15.722 "num_base_bdevs_operational": 3, 00:09:15.722 "base_bdevs_list": [ 00:09:15.722 { 00:09:15.722 "name": "NewBaseBdev", 00:09:15.722 "uuid": "9bbcaef6-7cd7-46bb-b8dd-ea73cdec59be", 00:09:15.722 "is_configured": true, 00:09:15.722 "data_offset": 2048, 00:09:15.722 "data_size": 63488 00:09:15.722 }, 00:09:15.722 { 00:09:15.722 "name": "BaseBdev2", 00:09:15.722 "uuid": "964565a4-e459-46fe-ba37-19dbe94d8854", 00:09:15.722 "is_configured": true, 00:09:15.722 "data_offset": 2048, 00:09:15.722 "data_size": 63488 00:09:15.722 }, 00:09:15.722 { 00:09:15.722 "name": "BaseBdev3", 00:09:15.722 "uuid": "ae7fa802-c7b6-425d-ab83-1a6f6c86718a", 00:09:15.723 "is_configured": true, 00:09:15.723 "data_offset": 2048, 00:09:15.723 "data_size": 63488 00:09:15.723 } 00:09:15.723 ] 00:09:15.723 } 00:09:15.723 } 00:09:15.723 }' 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:15.723 BaseBdev2 00:09:15.723 BaseBdev3' 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.723 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.983 [2024-12-09 14:41:53.968224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.983 [2024-12-09 14:41:53.968252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.983 [2024-12-09 14:41:53.968341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.983 [2024-12-09 14:41:53.968399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.983 [2024-12-09 14:41:53.968411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67513 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67513 ']' 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67513 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.983 14:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67513 00:09:15.983 14:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.983 14:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.983 14:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67513' 00:09:15.983 killing process with pid 67513 00:09:15.983 14:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67513 00:09:15.983 [2024-12-09 14:41:54.018682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.983 14:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67513 00:09:16.243 [2024-12-09 14:41:54.321190] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.623 14:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:17.623 ************************************ 00:09:17.623 END TEST raid_state_function_test_sb 00:09:17.623 ************************************ 00:09:17.623 00:09:17.623 real 0m10.626s 00:09:17.623 user 0m16.903s 00:09:17.623 sys 0m1.880s 00:09:17.623 14:41:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.623 14:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.623 14:41:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:17.623 14:41:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:17.623 14:41:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.623 14:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.623 ************************************ 00:09:17.623 START TEST raid_superblock_test 00:09:17.623 ************************************ 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:17.623 14:41:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68133 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68133 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68133 ']' 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.623 14:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.623 [2024-12-09 14:41:55.600327] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:09:17.623 [2024-12-09 14:41:55.600442] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68133 ] 00:09:17.882 [2024-12-09 14:41:55.774988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.882 [2024-12-09 14:41:55.890289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.141 [2024-12-09 14:41:56.085137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.141 [2024-12-09 14:41:56.085186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:18.401 
14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 malloc1 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 [2024-12-09 14:41:56.485222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.401 [2024-12-09 14:41:56.485323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.401 [2024-12-09 14:41:56.485363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:18.401 [2024-12-09 14:41:56.485391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.401 [2024-12-09 14:41:56.487507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.401 [2024-12-09 14:41:56.487588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.401 pt1 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.401 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.660 malloc2 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.661 [2024-12-09 14:41:56.535950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.661 [2024-12-09 14:41:56.536006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.661 [2024-12-09 14:41:56.536031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:18.661 [2024-12-09 14:41:56.536041] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.661 [2024-12-09 14:41:56.538018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.661 [2024-12-09 14:41:56.538052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.661 
pt2 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.661 malloc3 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.661 [2024-12-09 14:41:56.601875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.661 [2024-12-09 14:41:56.601974] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.661 [2024-12-09 14:41:56.602016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:18.661 [2024-12-09 14:41:56.602045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.661 [2024-12-09 14:41:56.604326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.661 [2024-12-09 14:41:56.604414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.661 pt3 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.661 [2024-12-09 14:41:56.613900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.661 [2024-12-09 14:41:56.615865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.661 [2024-12-09 14:41:56.615975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.661 [2024-12-09 14:41:56.616169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:18.661 [2024-12-09 14:41:56.616219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.661 [2024-12-09 14:41:56.616499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:18.661 [2024-12-09 14:41:56.616740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:18.661 [2024-12-09 14:41:56.616783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:18.661 [2024-12-09 14:41:56.616982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.661 14:41:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.661 "name": "raid_bdev1", 00:09:18.661 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:18.661 "strip_size_kb": 64, 00:09:18.661 "state": "online", 00:09:18.661 "raid_level": "concat", 00:09:18.661 "superblock": true, 00:09:18.661 "num_base_bdevs": 3, 00:09:18.661 "num_base_bdevs_discovered": 3, 00:09:18.661 "num_base_bdevs_operational": 3, 00:09:18.661 "base_bdevs_list": [ 00:09:18.661 { 00:09:18.661 "name": "pt1", 00:09:18.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.661 "is_configured": true, 00:09:18.661 "data_offset": 2048, 00:09:18.661 "data_size": 63488 00:09:18.661 }, 00:09:18.661 { 00:09:18.661 "name": "pt2", 00:09:18.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.661 "is_configured": true, 00:09:18.661 "data_offset": 2048, 00:09:18.661 "data_size": 63488 00:09:18.661 }, 00:09:18.661 { 00:09:18.661 "name": "pt3", 00:09:18.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.661 "is_configured": true, 00:09:18.661 "data_offset": 2048, 00:09:18.661 "data_size": 63488 00:09:18.661 } 00:09:18.661 ] 00:09:18.661 }' 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.661 14:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.230 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 [2024-12-09 14:41:57.057613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.231 "name": "raid_bdev1", 00:09:19.231 "aliases": [ 00:09:19.231 "b4ec2c61-746e-421e-90cf-e3b12da015b0" 00:09:19.231 ], 00:09:19.231 "product_name": "Raid Volume", 00:09:19.231 "block_size": 512, 00:09:19.231 "num_blocks": 190464, 00:09:19.231 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:19.231 "assigned_rate_limits": { 00:09:19.231 "rw_ios_per_sec": 0, 00:09:19.231 "rw_mbytes_per_sec": 0, 00:09:19.231 "r_mbytes_per_sec": 0, 00:09:19.231 "w_mbytes_per_sec": 0 00:09:19.231 }, 00:09:19.231 "claimed": false, 00:09:19.231 "zoned": false, 00:09:19.231 "supported_io_types": { 00:09:19.231 "read": true, 00:09:19.231 "write": true, 00:09:19.231 "unmap": true, 00:09:19.231 "flush": true, 00:09:19.231 "reset": true, 00:09:19.231 "nvme_admin": false, 00:09:19.231 "nvme_io": false, 00:09:19.231 "nvme_io_md": false, 00:09:19.231 "write_zeroes": true, 00:09:19.231 "zcopy": false, 00:09:19.231 "get_zone_info": false, 00:09:19.231 "zone_management": false, 00:09:19.231 "zone_append": false, 00:09:19.231 "compare": 
false, 00:09:19.231 "compare_and_write": false, 00:09:19.231 "abort": false, 00:09:19.231 "seek_hole": false, 00:09:19.231 "seek_data": false, 00:09:19.231 "copy": false, 00:09:19.231 "nvme_iov_md": false 00:09:19.231 }, 00:09:19.231 "memory_domains": [ 00:09:19.231 { 00:09:19.231 "dma_device_id": "system", 00:09:19.231 "dma_device_type": 1 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.231 "dma_device_type": 2 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "dma_device_id": "system", 00:09:19.231 "dma_device_type": 1 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.231 "dma_device_type": 2 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "dma_device_id": "system", 00:09:19.231 "dma_device_type": 1 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.231 "dma_device_type": 2 00:09:19.231 } 00:09:19.231 ], 00:09:19.231 "driver_specific": { 00:09:19.231 "raid": { 00:09:19.231 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:19.231 "strip_size_kb": 64, 00:09:19.231 "state": "online", 00:09:19.231 "raid_level": "concat", 00:09:19.231 "superblock": true, 00:09:19.231 "num_base_bdevs": 3, 00:09:19.231 "num_base_bdevs_discovered": 3, 00:09:19.231 "num_base_bdevs_operational": 3, 00:09:19.231 "base_bdevs_list": [ 00:09:19.231 { 00:09:19.231 "name": "pt1", 00:09:19.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.231 "is_configured": true, 00:09:19.231 "data_offset": 2048, 00:09:19.231 "data_size": 63488 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "name": "pt2", 00:09:19.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.231 "is_configured": true, 00:09:19.231 "data_offset": 2048, 00:09:19.231 "data_size": 63488 00:09:19.231 }, 00:09:19.231 { 00:09:19.231 "name": "pt3", 00:09:19.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.231 "is_configured": true, 00:09:19.231 "data_offset": 2048, 00:09:19.231 
"data_size": 63488 00:09:19.231 } 00:09:19.231 ] 00:09:19.231 } 00:09:19.231 } 00:09:19.231 }' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:19.231 pt2 00:09:19.231 pt3' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.231 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 [2024-12-09 14:41:57.337114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b4ec2c61-746e-421e-90cf-e3b12da015b0 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b4ec2c61-746e-421e-90cf-e3b12da015b0 ']' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 [2024-12-09 14:41:57.384646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.501 [2024-12-09 14:41:57.384682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.501 [2024-12-09 14:41:57.384782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.501 [2024-12-09 14:41:57.384851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.501 [2024-12-09 14:41:57.384874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 [2024-12-09 14:41:57.532401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:19.501 [2024-12-09 14:41:57.534468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:19.501 [2024-12-09 14:41:57.534526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:19.501 [2024-12-09 14:41:57.534598] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:19.501 [2024-12-09 14:41:57.534696] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:19.501 [2024-12-09 14:41:57.534722] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:19.501 [2024-12-09 14:41:57.534746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.501 [2024-12-09 14:41:57.534760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:19.501 request: 00:09:19.501 { 00:09:19.501 "name": "raid_bdev1", 00:09:19.501 "raid_level": "concat", 00:09:19.501 "base_bdevs": [ 00:09:19.501 "malloc1", 00:09:19.501 "malloc2", 00:09:19.501 "malloc3" 00:09:19.501 ], 00:09:19.501 "strip_size_kb": 64, 00:09:19.501 "superblock": false, 00:09:19.501 "method": "bdev_raid_create", 00:09:19.501 "req_id": 1 00:09:19.501 } 00:09:19.501 Got JSON-RPC error response 00:09:19.501 response: 00:09:19.501 { 00:09:19.501 "code": -17, 00:09:19.501 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:19.501 } 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 [2024-12-09 14:41:57.596248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.501 [2024-12-09 14:41:57.596379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.501 [2024-12-09 14:41:57.596420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:19.501 [2024-12-09 14:41:57.596449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.501 [2024-12-09 14:41:57.598838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.501 [2024-12-09 14:41:57.598937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.501 [2024-12-09 14:41:57.599066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:19.501 [2024-12-09 14:41:57.599160] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.501 pt1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.501 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.763 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.763 "name": "raid_bdev1", 
00:09:19.763 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:19.763 "strip_size_kb": 64, 00:09:19.763 "state": "configuring", 00:09:19.763 "raid_level": "concat", 00:09:19.763 "superblock": true, 00:09:19.763 "num_base_bdevs": 3, 00:09:19.763 "num_base_bdevs_discovered": 1, 00:09:19.763 "num_base_bdevs_operational": 3, 00:09:19.763 "base_bdevs_list": [ 00:09:19.763 { 00:09:19.763 "name": "pt1", 00:09:19.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.763 "is_configured": true, 00:09:19.763 "data_offset": 2048, 00:09:19.763 "data_size": 63488 00:09:19.763 }, 00:09:19.763 { 00:09:19.763 "name": null, 00:09:19.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.763 "is_configured": false, 00:09:19.763 "data_offset": 2048, 00:09:19.763 "data_size": 63488 00:09:19.763 }, 00:09:19.763 { 00:09:19.763 "name": null, 00:09:19.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.764 "is_configured": false, 00:09:19.764 "data_offset": 2048, 00:09:19.764 "data_size": 63488 00:09:19.764 } 00:09:19.764 ] 00:09:19.764 }' 00:09:19.764 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.764 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.024 [2024-12-09 14:41:57.971654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.024 [2024-12-09 14:41:57.971731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.024 [2024-12-09 14:41:57.971763] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:20.024 [2024-12-09 14:41:57.971773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.024 [2024-12-09 14:41:57.972265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.024 [2024-12-09 14:41:57.972284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.024 [2024-12-09 14:41:57.972378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.024 [2024-12-09 14:41:57.972410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.024 pt2 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.024 [2024-12-09 14:41:57.983633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.024 14:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.024 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.024 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.024 "name": "raid_bdev1", 00:09:20.024 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:20.024 "strip_size_kb": 64, 00:09:20.024 "state": "configuring", 00:09:20.024 "raid_level": "concat", 00:09:20.024 "superblock": true, 00:09:20.024 "num_base_bdevs": 3, 00:09:20.024 "num_base_bdevs_discovered": 1, 00:09:20.024 "num_base_bdevs_operational": 3, 00:09:20.024 "base_bdevs_list": [ 00:09:20.024 { 00:09:20.024 "name": "pt1", 00:09:20.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.024 "is_configured": true, 00:09:20.024 "data_offset": 2048, 00:09:20.024 "data_size": 63488 00:09:20.024 }, 00:09:20.024 { 00:09:20.024 "name": null, 00:09:20.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.024 "is_configured": false, 00:09:20.024 "data_offset": 0, 00:09:20.024 "data_size": 63488 00:09:20.024 }, 00:09:20.024 { 00:09:20.024 "name": null, 00:09:20.024 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.024 "is_configured": false, 00:09:20.024 "data_offset": 2048, 00:09:20.024 "data_size": 63488 00:09:20.024 } 00:09:20.024 ] 00:09:20.024 }' 00:09:20.024 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.024 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 [2024-12-09 14:41:58.442831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.593 [2024-12-09 14:41:58.442950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.593 [2024-12-09 14:41:58.443003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:20.593 [2024-12-09 14:41:58.443037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.593 [2024-12-09 14:41:58.443533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.593 [2024-12-09 14:41:58.443612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.593 [2024-12-09 14:41:58.443739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.593 [2024-12-09 14:41:58.443796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.593 pt2 00:09:20.593 14:41:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.593 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.594 [2024-12-09 14:41:58.454760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:20.594 [2024-12-09 14:41:58.454841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.594 [2024-12-09 14:41:58.454872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:20.594 [2024-12-09 14:41:58.454922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.594 [2024-12-09 14:41:58.455322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.594 [2024-12-09 14:41:58.455387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:20.594 [2024-12-09 14:41:58.455475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:20.594 [2024-12-09 14:41:58.455526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:20.594 [2024-12-09 14:41:58.455673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.594 [2024-12-09 14:41:58.455718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.594 [2024-12-09 14:41:58.456001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:20.594 [2024-12-09 14:41:58.456191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.594 [2024-12-09 14:41:58.456231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:20.594 [2024-12-09 14:41:58.456411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.594 pt3 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.594 14:41:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.594 "name": "raid_bdev1", 00:09:20.594 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:20.594 "strip_size_kb": 64, 00:09:20.594 "state": "online", 00:09:20.594 "raid_level": "concat", 00:09:20.594 "superblock": true, 00:09:20.594 "num_base_bdevs": 3, 00:09:20.594 "num_base_bdevs_discovered": 3, 00:09:20.594 "num_base_bdevs_operational": 3, 00:09:20.594 "base_bdevs_list": [ 00:09:20.594 { 00:09:20.594 "name": "pt1", 00:09:20.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.594 "is_configured": true, 00:09:20.594 "data_offset": 2048, 00:09:20.594 "data_size": 63488 00:09:20.594 }, 00:09:20.594 { 00:09:20.594 "name": "pt2", 00:09:20.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.594 "is_configured": true, 00:09:20.594 "data_offset": 2048, 00:09:20.594 "data_size": 63488 00:09:20.594 }, 00:09:20.594 { 00:09:20.594 "name": "pt3", 00:09:20.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.594 "is_configured": true, 00:09:20.594 "data_offset": 2048, 00:09:20.594 "data_size": 63488 00:09:20.594 } 00:09:20.594 ] 00:09:20.594 }' 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.594 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.853 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.854 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.854 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.854 [2024-12-09 14:41:58.922323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.854 14:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.854 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.854 "name": "raid_bdev1", 00:09:20.854 "aliases": [ 00:09:20.854 "b4ec2c61-746e-421e-90cf-e3b12da015b0" 00:09:20.854 ], 00:09:20.854 "product_name": "Raid Volume", 00:09:20.854 "block_size": 512, 00:09:20.854 "num_blocks": 190464, 00:09:20.854 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:20.854 "assigned_rate_limits": { 00:09:20.854 "rw_ios_per_sec": 0, 00:09:20.854 "rw_mbytes_per_sec": 0, 00:09:20.854 "r_mbytes_per_sec": 0, 00:09:20.854 "w_mbytes_per_sec": 0 00:09:20.854 }, 00:09:20.854 "claimed": false, 00:09:20.854 "zoned": false, 00:09:20.854 "supported_io_types": { 00:09:20.854 "read": true, 00:09:20.854 "write": true, 00:09:20.854 "unmap": true, 00:09:20.854 "flush": true, 00:09:20.854 "reset": true, 00:09:20.854 "nvme_admin": false, 00:09:20.854 "nvme_io": false, 00:09:20.854 
"nvme_io_md": false, 00:09:20.854 "write_zeroes": true, 00:09:20.854 "zcopy": false, 00:09:20.854 "get_zone_info": false, 00:09:20.854 "zone_management": false, 00:09:20.854 "zone_append": false, 00:09:20.854 "compare": false, 00:09:20.854 "compare_and_write": false, 00:09:20.854 "abort": false, 00:09:20.854 "seek_hole": false, 00:09:20.854 "seek_data": false, 00:09:20.854 "copy": false, 00:09:20.854 "nvme_iov_md": false 00:09:20.854 }, 00:09:20.854 "memory_domains": [ 00:09:20.854 { 00:09:20.854 "dma_device_id": "system", 00:09:20.854 "dma_device_type": 1 00:09:20.854 }, 00:09:20.854 { 00:09:20.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.854 "dma_device_type": 2 00:09:20.854 }, 00:09:20.854 { 00:09:20.854 "dma_device_id": "system", 00:09:20.854 "dma_device_type": 1 00:09:20.854 }, 00:09:20.854 { 00:09:20.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.854 "dma_device_type": 2 00:09:20.854 }, 00:09:20.854 { 00:09:20.854 "dma_device_id": "system", 00:09:20.854 "dma_device_type": 1 00:09:20.854 }, 00:09:20.854 { 00:09:20.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.854 "dma_device_type": 2 00:09:20.854 } 00:09:20.854 ], 00:09:20.854 "driver_specific": { 00:09:20.854 "raid": { 00:09:20.854 "uuid": "b4ec2c61-746e-421e-90cf-e3b12da015b0", 00:09:20.854 "strip_size_kb": 64, 00:09:20.854 "state": "online", 00:09:20.854 "raid_level": "concat", 00:09:20.854 "superblock": true, 00:09:20.854 "num_base_bdevs": 3, 00:09:20.854 "num_base_bdevs_discovered": 3, 00:09:20.854 "num_base_bdevs_operational": 3, 00:09:20.854 "base_bdevs_list": [ 00:09:20.854 { 00:09:20.854 "name": "pt1", 00:09:20.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.854 "is_configured": true, 00:09:20.854 "data_offset": 2048, 00:09:20.854 "data_size": 63488 00:09:20.854 }, 00:09:20.854 { 00:09:20.854 "name": "pt2", 00:09:20.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.854 "is_configured": true, 00:09:20.854 "data_offset": 2048, 00:09:20.854 "data_size": 
63488 00:09:20.854 }, 00:09:20.854 { 00:09:20.854 "name": "pt3", 00:09:20.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.854 "is_configured": true, 00:09:20.854 "data_offset": 2048, 00:09:20.854 "data_size": 63488 00:09:20.854 } 00:09:20.854 ] 00:09:20.854 } 00:09:20.854 } 00:09:20.854 }' 00:09:20.854 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.114 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:21.114 pt2 00:09:21.114 pt3' 00:09:21.114 14:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:21.114 [2024-12-09 14:41:59.201834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.114 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b4ec2c61-746e-421e-90cf-e3b12da015b0 '!=' b4ec2c61-746e-421e-90cf-e3b12da015b0 ']' 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68133 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68133 ']' 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68133 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68133 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68133' 00:09:21.374 killing process with pid 68133 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68133 00:09:21.374 [2024-12-09 14:41:59.268975] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.374 [2024-12-09 
14:41:59.269071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.374 [2024-12-09 14:41:59.269135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.374 14:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68133 00:09:21.374 [2024-12-09 14:41:59.269147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:21.634 [2024-12-09 14:41:59.578408] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.015 14:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:23.015 00:09:23.015 real 0m5.212s 00:09:23.015 user 0m7.461s 00:09:23.015 sys 0m0.865s 00:09:23.015 14:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.015 14:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.015 ************************************ 00:09:23.015 END TEST raid_superblock_test 00:09:23.015 ************************************ 00:09:23.015 14:42:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:23.015 14:42:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:23.015 14:42:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.015 14:42:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.015 ************************************ 00:09:23.015 START TEST raid_read_error_test 00:09:23.015 ************************************ 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:23.015 
14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.015 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.016 14:42:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.09fnQp5yrN 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68386 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68386 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68386 ']' 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.016 14:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.016 [2024-12-09 14:42:00.888781] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:09:23.016 [2024-12-09 14:42:00.888990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68386 ] 00:09:23.016 [2024-12-09 14:42:01.064105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.275 [2024-12-09 14:42:01.181449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.275 [2024-12-09 14:42:01.386989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.276 [2024-12-09 14:42:01.387128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.842 BaseBdev1_malloc 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.842 true 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.842 [2024-12-09 14:42:01.790583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:23.842 [2024-12-09 14:42:01.790645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.842 [2024-12-09 14:42:01.790666] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:23.842 [2024-12-09 14:42:01.790677] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.842 [2024-12-09 14:42:01.792893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.842 [2024-12-09 14:42:01.792935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:23.842 BaseBdev1 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.842 BaseBdev2_malloc 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.842 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.842 true 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.843 [2024-12-09 14:42:01.856861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:23.843 [2024-12-09 14:42:01.856958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.843 [2024-12-09 14:42:01.856978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:23.843 [2024-12-09 14:42:01.856989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.843 [2024-12-09 14:42:01.859110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.843 [2024-12-09 14:42:01.859151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:23.843 BaseBdev2 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.843 BaseBdev3_malloc 00:09:23.843 14:42:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.843 true 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.843 [2024-12-09 14:42:01.934850] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:23.843 [2024-12-09 14:42:01.934947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.843 [2024-12-09 14:42:01.934969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:23.843 [2024-12-09 14:42:01.934981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.843 [2024-12-09 14:42:01.937092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.843 [2024-12-09 14:42:01.937134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:23.843 BaseBdev3 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.843 [2024-12-09 14:42:01.946913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.843 [2024-12-09 14:42:01.948808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.843 [2024-12-09 14:42:01.948883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.843 [2024-12-09 14:42:01.949086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:23.843 [2024-12-09 14:42:01.949099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:23.843 [2024-12-09 14:42:01.949348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:23.843 [2024-12-09 14:42:01.949510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:23.843 [2024-12-09 14:42:01.949528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:23.843 [2024-12-09 14:42:01.949688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.843 14:42:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.843 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.102 14:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.102 14:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.102 "name": "raid_bdev1", 00:09:24.102 "uuid": "b28f44de-3829-4cbf-8b2c-9fef91ba488f", 00:09:24.102 "strip_size_kb": 64, 00:09:24.102 "state": "online", 00:09:24.102 "raid_level": "concat", 00:09:24.102 "superblock": true, 00:09:24.102 "num_base_bdevs": 3, 00:09:24.102 "num_base_bdevs_discovered": 3, 00:09:24.102 "num_base_bdevs_operational": 3, 00:09:24.102 "base_bdevs_list": [ 00:09:24.102 { 00:09:24.102 "name": "BaseBdev1", 00:09:24.102 "uuid": "2dbe885b-e534-5000-a6e3-cce1a6d8df5d", 00:09:24.102 "is_configured": true, 00:09:24.102 "data_offset": 2048, 00:09:24.102 "data_size": 63488 00:09:24.102 }, 00:09:24.102 { 00:09:24.102 "name": "BaseBdev2", 00:09:24.102 "uuid": "d8fb8337-21dd-5bda-b315-9c5def9ecf53", 00:09:24.102 "is_configured": true, 00:09:24.102 "data_offset": 2048, 00:09:24.102 "data_size": 63488 
00:09:24.102 }, 00:09:24.102 { 00:09:24.102 "name": "BaseBdev3", 00:09:24.102 "uuid": "d6defb57-803e-592e-8eb2-1f0dfe2dec66", 00:09:24.102 "is_configured": true, 00:09:24.102 "data_offset": 2048, 00:09:24.102 "data_size": 63488 00:09:24.102 } 00:09:24.102 ] 00:09:24.102 }' 00:09:24.102 14:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.102 14:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.361 14:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.361 14:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.620 [2024-12-09 14:42:02.527471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.655 "name": "raid_bdev1", 00:09:25.655 "uuid": "b28f44de-3829-4cbf-8b2c-9fef91ba488f", 00:09:25.655 "strip_size_kb": 64, 00:09:25.655 "state": "online", 00:09:25.655 "raid_level": "concat", 00:09:25.655 "superblock": true, 00:09:25.655 "num_base_bdevs": 3, 00:09:25.655 "num_base_bdevs_discovered": 3, 00:09:25.655 "num_base_bdevs_operational": 3, 00:09:25.655 "base_bdevs_list": [ 00:09:25.655 { 00:09:25.655 "name": "BaseBdev1", 00:09:25.655 "uuid": "2dbe885b-e534-5000-a6e3-cce1a6d8df5d", 00:09:25.655 "is_configured": true, 00:09:25.655 "data_offset": 2048, 00:09:25.655 "data_size": 63488 
00:09:25.655 }, 00:09:25.655 { 00:09:25.655 "name": "BaseBdev2", 00:09:25.655 "uuid": "d8fb8337-21dd-5bda-b315-9c5def9ecf53", 00:09:25.655 "is_configured": true, 00:09:25.655 "data_offset": 2048, 00:09:25.655 "data_size": 63488 00:09:25.655 }, 00:09:25.655 { 00:09:25.655 "name": "BaseBdev3", 00:09:25.655 "uuid": "d6defb57-803e-592e-8eb2-1f0dfe2dec66", 00:09:25.655 "is_configured": true, 00:09:25.655 "data_offset": 2048, 00:09:25.655 "data_size": 63488 00:09:25.655 } 00:09:25.655 ] 00:09:25.655 }' 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.655 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.914 [2024-12-09 14:42:03.887962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.914 [2024-12-09 14:42:03.888059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.914 [2024-12-09 14:42:03.890996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.914 [2024-12-09 14:42:03.891111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.914 [2024-12-09 14:42:03.891176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.914 [2024-12-09 14:42:03.891226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:25.914 { 00:09:25.914 "results": [ 00:09:25.914 { 00:09:25.914 "job": "raid_bdev1", 00:09:25.914 "core_mask": "0x1", 00:09:25.914 "workload": "randrw", 00:09:25.914 "percentage": 50, 
00:09:25.914 "status": "finished", 00:09:25.914 "queue_depth": 1, 00:09:25.914 "io_size": 131072, 00:09:25.914 "runtime": 1.361232, 00:09:25.914 "iops": 14264.284119092117, 00:09:25.914 "mibps": 1783.0355148865146, 00:09:25.914 "io_failed": 1, 00:09:25.914 "io_timeout": 0, 00:09:25.914 "avg_latency_us": 97.08521827989247, 00:09:25.914 "min_latency_us": 27.165065502183406, 00:09:25.914 "max_latency_us": 1430.9170305676855 00:09:25.914 } 00:09:25.914 ], 00:09:25.914 "core_count": 1 00:09:25.914 } 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68386 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68386 ']' 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68386 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68386 00:09:25.914 killing process with pid 68386 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68386' 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68386 00:09:25.914 [2024-12-09 14:42:03.920830] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.914 14:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68386 00:09:26.173 [2024-12-09 
14:42:04.160525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.09fnQp5yrN 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:27.553 00:09:27.553 real 0m4.582s 00:09:27.553 user 0m5.486s 00:09:27.553 sys 0m0.533s 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.553 14:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.553 ************************************ 00:09:27.553 END TEST raid_read_error_test 00:09:27.553 ************************************ 00:09:27.553 14:42:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:27.553 14:42:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.553 14:42:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.553 14:42:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.553 ************************************ 00:09:27.553 START TEST raid_write_error_test 00:09:27.553 ************************************ 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:27.553 14:42:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.553 14:42:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:27.553 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UkeWDZlteP 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68532 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68532 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68532 ']' 00:09:27.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.554 14:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.554 [2024-12-09 14:42:05.543861] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:09:27.554 [2024-12-09 14:42:05.543994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68532 ] 00:09:27.814 [2024-12-09 14:42:05.717994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.814 [2024-12-09 14:42:05.837803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.073 [2024-12-09 14:42:06.053545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.073 [2024-12-09 14:42:06.053610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.355 BaseBdev1_malloc 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.355 true 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.355 [2024-12-09 14:42:06.441681] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.355 [2024-12-09 14:42:06.441736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.355 [2024-12-09 14:42:06.441757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.355 [2024-12-09 14:42:06.441768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.355 [2024-12-09 14:42:06.443831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.355 [2024-12-09 14:42:06.443940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.355 BaseBdev1 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.355 14:42:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.615 BaseBdev2_malloc 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.615 true 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.615 [2024-12-09 14:42:06.508187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.615 [2024-12-09 14:42:06.508243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.615 [2024-12-09 14:42:06.508260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.615 [2024-12-09 14:42:06.508270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.615 [2024-12-09 14:42:06.510238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.615 [2024-12-09 14:42:06.510279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.615 BaseBdev2 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.615 14:42:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.615 BaseBdev3_malloc 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.615 true 00:09:28.615 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.616 [2024-12-09 14:42:06.590930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:28.616 [2024-12-09 14:42:06.591099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.616 [2024-12-09 14:42:06.591133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:28.616 [2024-12-09 14:42:06.591146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.616 [2024-12-09 14:42:06.593495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.616 [2024-12-09 14:42:06.593543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:28.616 BaseBdev3 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.616 [2024-12-09 14:42:06.603035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.616 [2024-12-09 14:42:06.605072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.616 [2024-12-09 14:42:06.605157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.616 [2024-12-09 14:42:06.605399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:28.616 [2024-12-09 14:42:06.605414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:28.616 [2024-12-09 14:42:06.605723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:28.616 [2024-12-09 14:42:06.605915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:28.616 [2024-12-09 14:42:06.605935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:28.616 [2024-12-09 14:42:06.606121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.616 "name": "raid_bdev1", 00:09:28.616 "uuid": "e7691aac-eeb4-4470-9c0a-774921b9267b", 00:09:28.616 "strip_size_kb": 64, 00:09:28.616 "state": "online", 00:09:28.616 "raid_level": "concat", 00:09:28.616 "superblock": true, 00:09:28.616 "num_base_bdevs": 3, 00:09:28.616 "num_base_bdevs_discovered": 3, 00:09:28.616 "num_base_bdevs_operational": 3, 00:09:28.616 "base_bdevs_list": [ 00:09:28.616 { 00:09:28.616 
"name": "BaseBdev1", 00:09:28.616 "uuid": "816a0624-f86f-5d2a-b4ae-0751daa4b7ff", 00:09:28.616 "is_configured": true, 00:09:28.616 "data_offset": 2048, 00:09:28.616 "data_size": 63488 00:09:28.616 }, 00:09:28.616 { 00:09:28.616 "name": "BaseBdev2", 00:09:28.616 "uuid": "525bce57-315e-5c04-8518-bdf4cc70b6bc", 00:09:28.616 "is_configured": true, 00:09:28.616 "data_offset": 2048, 00:09:28.616 "data_size": 63488 00:09:28.616 }, 00:09:28.616 { 00:09:28.616 "name": "BaseBdev3", 00:09:28.616 "uuid": "3b1ae8cb-2aef-511c-a7cb-26cab47882da", 00:09:28.616 "is_configured": true, 00:09:28.616 "data_offset": 2048, 00:09:28.616 "data_size": 63488 00:09:28.616 } 00:09:28.616 ] 00:09:28.616 }' 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.616 14:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.186 14:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:29.186 14:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:29.186 [2024-12-09 14:42:07.147376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.122 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.123 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.123 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.123 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.123 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.123 "name": "raid_bdev1", 00:09:30.123 "uuid": "e7691aac-eeb4-4470-9c0a-774921b9267b", 00:09:30.123 "strip_size_kb": 64, 00:09:30.123 "state": "online", 
00:09:30.123 "raid_level": "concat", 00:09:30.123 "superblock": true, 00:09:30.123 "num_base_bdevs": 3, 00:09:30.123 "num_base_bdevs_discovered": 3, 00:09:30.123 "num_base_bdevs_operational": 3, 00:09:30.123 "base_bdevs_list": [ 00:09:30.123 { 00:09:30.123 "name": "BaseBdev1", 00:09:30.123 "uuid": "816a0624-f86f-5d2a-b4ae-0751daa4b7ff", 00:09:30.123 "is_configured": true, 00:09:30.123 "data_offset": 2048, 00:09:30.123 "data_size": 63488 00:09:30.123 }, 00:09:30.123 { 00:09:30.123 "name": "BaseBdev2", 00:09:30.123 "uuid": "525bce57-315e-5c04-8518-bdf4cc70b6bc", 00:09:30.123 "is_configured": true, 00:09:30.123 "data_offset": 2048, 00:09:30.123 "data_size": 63488 00:09:30.123 }, 00:09:30.123 { 00:09:30.123 "name": "BaseBdev3", 00:09:30.123 "uuid": "3b1ae8cb-2aef-511c-a7cb-26cab47882da", 00:09:30.123 "is_configured": true, 00:09:30.123 "data_offset": 2048, 00:09:30.123 "data_size": 63488 00:09:30.123 } 00:09:30.123 ] 00:09:30.123 }' 00:09:30.123 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.123 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.382 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.382 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.382 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.382 [2024-12-09 14:42:08.495894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.382 [2024-12-09 14:42:08.496004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.382 [2024-12-09 14:42:08.498970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.382 [2024-12-09 14:42:08.499084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.382 [2024-12-09 14:42:08.499162] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.383 [2024-12-09 14:42:08.499215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:30.383 { 00:09:30.383 "results": [ 00:09:30.383 { 00:09:30.383 "job": "raid_bdev1", 00:09:30.383 "core_mask": "0x1", 00:09:30.383 "workload": "randrw", 00:09:30.383 "percentage": 50, 00:09:30.383 "status": "finished", 00:09:30.383 "queue_depth": 1, 00:09:30.383 "io_size": 131072, 00:09:30.383 "runtime": 1.349448, 00:09:30.383 "iops": 14761.591406263895, 00:09:30.383 "mibps": 1845.1989257829869, 00:09:30.383 "io_failed": 1, 00:09:30.383 "io_timeout": 0, 00:09:30.383 "avg_latency_us": 93.93393862087122, 00:09:30.383 "min_latency_us": 26.606113537117903, 00:09:30.383 "max_latency_us": 1638.4 00:09:30.383 } 00:09:30.383 ], 00:09:30.383 "core_count": 1 00:09:30.383 } 00:09:30.383 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.383 14:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68532 00:09:30.383 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68532 ']' 00:09:30.383 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68532 00:09:30.643 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:30.643 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.643 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68532 00:09:30.643 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.643 killing process with pid 68532 00:09:30.643 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.643 14:42:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68532' 00:09:30.643 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68532 00:09:30.643 [2024-12-09 14:42:08.545321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.643 14:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68532 00:09:30.902 [2024-12-09 14:42:08.780475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UkeWDZlteP 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:32.282 ************************************ 00:09:32.282 END TEST raid_write_error_test 00:09:32.282 ************************************ 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:32.282 00:09:32.282 real 0m4.567s 00:09:32.282 user 0m5.404s 00:09:32.282 sys 0m0.574s 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.282 14:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.282 14:42:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:32.282 14:42:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:32.282 14:42:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.282 14:42:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.282 14:42:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.282 ************************************ 00:09:32.282 START TEST raid_state_function_test 00:09:32.282 ************************************ 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=68670 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68670' 00:09:32.282 Process raid pid: 68670 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 68670 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 68670 ']' 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.282 14:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.282 [2024-12-09 14:42:10.171258] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:09:32.282 [2024-12-09 14:42:10.171462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.282 [2024-12-09 14:42:10.346711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.541 [2024-12-09 14:42:10.468518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.801 [2024-12-09 14:42:10.679026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.801 [2024-12-09 14:42:10.679161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.060 [2024-12-09 14:42:11.014043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.060 [2024-12-09 14:42:11.014102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.060 [2024-12-09 14:42:11.014112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.060 [2024-12-09 14:42:11.014121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.060 [2024-12-09 14:42:11.014127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.060 [2024-12-09 14:42:11.014137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.060 
14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.060 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.061 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.061 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.061 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.061 "name": "Existed_Raid", 00:09:33.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.061 "strip_size_kb": 0, 00:09:33.061 "state": "configuring", 00:09:33.061 "raid_level": "raid1", 00:09:33.061 "superblock": false, 00:09:33.061 "num_base_bdevs": 3, 00:09:33.061 "num_base_bdevs_discovered": 0, 00:09:33.061 "num_base_bdevs_operational": 3, 00:09:33.061 "base_bdevs_list": [ 00:09:33.061 { 00:09:33.061 "name": "BaseBdev1", 00:09:33.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.061 "is_configured": false, 00:09:33.061 "data_offset": 0, 00:09:33.061 "data_size": 0 00:09:33.061 }, 00:09:33.061 { 00:09:33.061 "name": "BaseBdev2", 00:09:33.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.061 "is_configured": false, 00:09:33.061 "data_offset": 0, 00:09:33.061 "data_size": 0 00:09:33.061 }, 00:09:33.061 { 00:09:33.061 "name": "BaseBdev3", 00:09:33.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.061 "is_configured": false, 00:09:33.061 "data_offset": 0, 00:09:33.061 "data_size": 0 00:09:33.061 } 00:09:33.061 ] 00:09:33.061 }' 00:09:33.061 14:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.061 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.631 [2024-12-09 14:42:11.469220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.631 [2024-12-09 14:42:11.469315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.631 [2024-12-09 14:42:11.481173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.631 [2024-12-09 14:42:11.481279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.631 [2024-12-09 14:42:11.481316] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.631 [2024-12-09 14:42:11.481343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.631 [2024-12-09 14:42:11.481364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.631 [2024-12-09 14:42:11.481388] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.631 [2024-12-09 14:42:11.531459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.631 BaseBdev1 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.631 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.631 [ 00:09:33.631 { 00:09:33.631 "name": "BaseBdev1", 00:09:33.631 "aliases": [ 00:09:33.631 "044a3bb2-cfc7-4424-af4d-e3b4d0aefac5" 00:09:33.631 ], 00:09:33.631 "product_name": "Malloc disk", 00:09:33.631 "block_size": 512, 00:09:33.631 "num_blocks": 65536, 00:09:33.631 "uuid": "044a3bb2-cfc7-4424-af4d-e3b4d0aefac5", 00:09:33.631 "assigned_rate_limits": { 00:09:33.631 "rw_ios_per_sec": 0, 00:09:33.631 "rw_mbytes_per_sec": 0, 00:09:33.631 "r_mbytes_per_sec": 0, 00:09:33.631 "w_mbytes_per_sec": 0 00:09:33.631 }, 00:09:33.631 "claimed": true, 00:09:33.631 "claim_type": "exclusive_write", 00:09:33.631 "zoned": false, 00:09:33.631 "supported_io_types": { 00:09:33.631 "read": true, 00:09:33.631 "write": true, 00:09:33.631 "unmap": true, 00:09:33.631 "flush": true, 00:09:33.631 "reset": true, 00:09:33.631 "nvme_admin": false, 00:09:33.631 "nvme_io": false, 00:09:33.631 "nvme_io_md": false, 00:09:33.631 "write_zeroes": true, 00:09:33.631 "zcopy": true, 00:09:33.631 "get_zone_info": false, 00:09:33.631 "zone_management": false, 00:09:33.631 "zone_append": false, 00:09:33.631 "compare": false, 00:09:33.631 "compare_and_write": false, 00:09:33.631 "abort": true, 00:09:33.631 "seek_hole": false, 00:09:33.631 "seek_data": false, 00:09:33.631 "copy": true, 00:09:33.631 "nvme_iov_md": false 00:09:33.631 }, 00:09:33.631 "memory_domains": [ 00:09:33.631 { 00:09:33.631 "dma_device_id": "system", 00:09:33.631 "dma_device_type": 1 00:09:33.631 }, 00:09:33.631 { 00:09:33.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.632 "dma_device_type": 2 00:09:33.632 } 00:09:33.632 ], 00:09:33.632 "driver_specific": {} 00:09:33.632 } 00:09:33.632 ] 00:09:33.632 14:42:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:33.632 "name": "Existed_Raid", 00:09:33.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.632 "strip_size_kb": 0, 00:09:33.632 "state": "configuring", 00:09:33.632 "raid_level": "raid1", 00:09:33.632 "superblock": false, 00:09:33.632 "num_base_bdevs": 3, 00:09:33.632 "num_base_bdevs_discovered": 1, 00:09:33.632 "num_base_bdevs_operational": 3, 00:09:33.632 "base_bdevs_list": [ 00:09:33.632 { 00:09:33.632 "name": "BaseBdev1", 00:09:33.632 "uuid": "044a3bb2-cfc7-4424-af4d-e3b4d0aefac5", 00:09:33.632 "is_configured": true, 00:09:33.632 "data_offset": 0, 00:09:33.632 "data_size": 65536 00:09:33.632 }, 00:09:33.632 { 00:09:33.632 "name": "BaseBdev2", 00:09:33.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.632 "is_configured": false, 00:09:33.632 "data_offset": 0, 00:09:33.632 "data_size": 0 00:09:33.632 }, 00:09:33.632 { 00:09:33.632 "name": "BaseBdev3", 00:09:33.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.632 "is_configured": false, 00:09:33.632 "data_offset": 0, 00:09:33.632 "data_size": 0 00:09:33.632 } 00:09:33.632 ] 00:09:33.632 }' 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.632 14:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.202 [2024-12-09 14:42:12.066637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.202 [2024-12-09 14:42:12.066766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.202 [2024-12-09 14:42:12.078642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.202 [2024-12-09 14:42:12.080608] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.202 [2024-12-09 14:42:12.080652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.202 [2024-12-09 14:42:12.080662] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.202 [2024-12-09 14:42:12.080671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.202 "name": "Existed_Raid", 00:09:34.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.202 "strip_size_kb": 0, 00:09:34.202 "state": "configuring", 00:09:34.202 "raid_level": "raid1", 00:09:34.202 "superblock": false, 00:09:34.202 "num_base_bdevs": 3, 00:09:34.202 "num_base_bdevs_discovered": 1, 00:09:34.202 "num_base_bdevs_operational": 3, 00:09:34.202 "base_bdevs_list": [ 00:09:34.202 { 00:09:34.202 "name": "BaseBdev1", 00:09:34.202 "uuid": "044a3bb2-cfc7-4424-af4d-e3b4d0aefac5", 00:09:34.202 "is_configured": true, 00:09:34.202 "data_offset": 0, 00:09:34.202 "data_size": 65536 00:09:34.202 }, 00:09:34.202 { 00:09:34.202 "name": "BaseBdev2", 00:09:34.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.202 
"is_configured": false, 00:09:34.202 "data_offset": 0, 00:09:34.202 "data_size": 0 00:09:34.202 }, 00:09:34.202 { 00:09:34.202 "name": "BaseBdev3", 00:09:34.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.202 "is_configured": false, 00:09:34.202 "data_offset": 0, 00:09:34.202 "data_size": 0 00:09:34.202 } 00:09:34.202 ] 00:09:34.202 }' 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.202 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.462 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.462 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.462 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.729 [2024-12-09 14:42:12.591282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.729 BaseBdev2 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.729 14:42:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.729 [ 00:09:34.729 { 00:09:34.729 "name": "BaseBdev2", 00:09:34.729 "aliases": [ 00:09:34.729 "e59fd898-9f3e-4643-9b36-9716fa280a10" 00:09:34.729 ], 00:09:34.729 "product_name": "Malloc disk", 00:09:34.729 "block_size": 512, 00:09:34.729 "num_blocks": 65536, 00:09:34.729 "uuid": "e59fd898-9f3e-4643-9b36-9716fa280a10", 00:09:34.729 "assigned_rate_limits": { 00:09:34.729 "rw_ios_per_sec": 0, 00:09:34.729 "rw_mbytes_per_sec": 0, 00:09:34.729 "r_mbytes_per_sec": 0, 00:09:34.729 "w_mbytes_per_sec": 0 00:09:34.729 }, 00:09:34.729 "claimed": true, 00:09:34.729 "claim_type": "exclusive_write", 00:09:34.729 "zoned": false, 00:09:34.729 "supported_io_types": { 00:09:34.729 "read": true, 00:09:34.729 "write": true, 00:09:34.729 "unmap": true, 00:09:34.729 "flush": true, 00:09:34.729 "reset": true, 00:09:34.729 "nvme_admin": false, 00:09:34.729 "nvme_io": false, 00:09:34.729 "nvme_io_md": false, 00:09:34.729 "write_zeroes": true, 00:09:34.729 "zcopy": true, 00:09:34.729 "get_zone_info": false, 00:09:34.729 "zone_management": false, 00:09:34.729 "zone_append": false, 00:09:34.729 "compare": false, 00:09:34.729 "compare_and_write": false, 00:09:34.729 "abort": true, 00:09:34.729 "seek_hole": false, 00:09:34.729 "seek_data": false, 00:09:34.729 "copy": true, 00:09:34.729 "nvme_iov_md": false 00:09:34.729 }, 00:09:34.729 
"memory_domains": [ 00:09:34.729 { 00:09:34.729 "dma_device_id": "system", 00:09:34.729 "dma_device_type": 1 00:09:34.729 }, 00:09:34.729 { 00:09:34.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.729 "dma_device_type": 2 00:09:34.729 } 00:09:34.729 ], 00:09:34.729 "driver_specific": {} 00:09:34.729 } 00:09:34.729 ] 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.729 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.730 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.730 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.730 "name": "Existed_Raid", 00:09:34.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.730 "strip_size_kb": 0, 00:09:34.730 "state": "configuring", 00:09:34.730 "raid_level": "raid1", 00:09:34.730 "superblock": false, 00:09:34.730 "num_base_bdevs": 3, 00:09:34.730 "num_base_bdevs_discovered": 2, 00:09:34.730 "num_base_bdevs_operational": 3, 00:09:34.730 "base_bdevs_list": [ 00:09:34.730 { 00:09:34.730 "name": "BaseBdev1", 00:09:34.730 "uuid": "044a3bb2-cfc7-4424-af4d-e3b4d0aefac5", 00:09:34.730 "is_configured": true, 00:09:34.730 "data_offset": 0, 00:09:34.730 "data_size": 65536 00:09:34.730 }, 00:09:34.730 { 00:09:34.730 "name": "BaseBdev2", 00:09:34.730 "uuid": "e59fd898-9f3e-4643-9b36-9716fa280a10", 00:09:34.730 "is_configured": true, 00:09:34.730 "data_offset": 0, 00:09:34.730 "data_size": 65536 00:09:34.730 }, 00:09:34.730 { 00:09:34.730 "name": "BaseBdev3", 00:09:34.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.730 "is_configured": false, 00:09:34.730 "data_offset": 0, 00:09:34.730 "data_size": 0 00:09:34.730 } 00:09:34.730 ] 00:09:34.730 }' 00:09:34.730 14:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.730 14:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.016 [2024-12-09 14:42:13.100898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.016 [2024-12-09 14:42:13.100958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.016 [2024-12-09 14:42:13.100973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:35.016 [2024-12-09 14:42:13.101271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.016 [2024-12-09 14:42:13.101464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.016 [2024-12-09 14:42:13.101474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.016 [2024-12-09 14:42:13.101808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.016 BaseBdev3 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.016 [ 00:09:35.016 { 00:09:35.016 "name": "BaseBdev3", 00:09:35.016 "aliases": [ 00:09:35.016 "e91b2b07-786a-4088-927d-62f461564e23" 00:09:35.016 ], 00:09:35.016 "product_name": "Malloc disk", 00:09:35.016 "block_size": 512, 00:09:35.016 "num_blocks": 65536, 00:09:35.016 "uuid": "e91b2b07-786a-4088-927d-62f461564e23", 00:09:35.016 "assigned_rate_limits": { 00:09:35.016 "rw_ios_per_sec": 0, 00:09:35.016 "rw_mbytes_per_sec": 0, 00:09:35.016 "r_mbytes_per_sec": 0, 00:09:35.016 "w_mbytes_per_sec": 0 00:09:35.016 }, 00:09:35.016 "claimed": true, 00:09:35.016 "claim_type": "exclusive_write", 00:09:35.016 "zoned": false, 00:09:35.016 "supported_io_types": { 00:09:35.016 "read": true, 00:09:35.016 "write": true, 00:09:35.016 "unmap": true, 00:09:35.016 "flush": true, 00:09:35.016 "reset": true, 00:09:35.016 "nvme_admin": false, 00:09:35.016 "nvme_io": false, 00:09:35.016 "nvme_io_md": false, 00:09:35.016 "write_zeroes": true, 00:09:35.016 "zcopy": true, 00:09:35.016 "get_zone_info": false, 00:09:35.016 "zone_management": false, 00:09:35.016 "zone_append": false, 00:09:35.016 "compare": false, 00:09:35.016 "compare_and_write": false, 00:09:35.016 "abort": true, 00:09:35.016 "seek_hole": false, 00:09:35.016 "seek_data": false, 00:09:35.016 
"copy": true, 00:09:35.016 "nvme_iov_md": false 00:09:35.016 }, 00:09:35.016 "memory_domains": [ 00:09:35.016 { 00:09:35.016 "dma_device_id": "system", 00:09:35.016 "dma_device_type": 1 00:09:35.016 }, 00:09:35.016 { 00:09:35.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.016 "dma_device_type": 2 00:09:35.016 } 00:09:35.016 ], 00:09:35.016 "driver_specific": {} 00:09:35.016 } 00:09:35.016 ] 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.016 14:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.016 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.276 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.276 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.276 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.276 "name": "Existed_Raid", 00:09:35.276 "uuid": "e649b238-61a3-4259-8a88-a6ea3aff719b", 00:09:35.276 "strip_size_kb": 0, 00:09:35.276 "state": "online", 00:09:35.276 "raid_level": "raid1", 00:09:35.276 "superblock": false, 00:09:35.276 "num_base_bdevs": 3, 00:09:35.276 "num_base_bdevs_discovered": 3, 00:09:35.276 "num_base_bdevs_operational": 3, 00:09:35.276 "base_bdevs_list": [ 00:09:35.276 { 00:09:35.276 "name": "BaseBdev1", 00:09:35.276 "uuid": "044a3bb2-cfc7-4424-af4d-e3b4d0aefac5", 00:09:35.276 "is_configured": true, 00:09:35.276 "data_offset": 0, 00:09:35.276 "data_size": 65536 00:09:35.276 }, 00:09:35.276 { 00:09:35.276 "name": "BaseBdev2", 00:09:35.276 "uuid": "e59fd898-9f3e-4643-9b36-9716fa280a10", 00:09:35.276 "is_configured": true, 00:09:35.276 "data_offset": 0, 00:09:35.276 "data_size": 65536 00:09:35.276 }, 00:09:35.276 { 00:09:35.276 "name": "BaseBdev3", 00:09:35.276 "uuid": "e91b2b07-786a-4088-927d-62f461564e23", 00:09:35.276 "is_configured": true, 00:09:35.276 "data_offset": 0, 00:09:35.276 "data_size": 65536 00:09:35.276 } 00:09:35.276 ] 00:09:35.276 }' 00:09:35.276 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.276 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 14:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.536 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.536 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.537 [2024-12-09 14:42:13.604450] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.537 "name": "Existed_Raid", 00:09:35.537 "aliases": [ 00:09:35.537 "e649b238-61a3-4259-8a88-a6ea3aff719b" 00:09:35.537 ], 00:09:35.537 "product_name": "Raid Volume", 00:09:35.537 "block_size": 512, 00:09:35.537 "num_blocks": 65536, 00:09:35.537 "uuid": "e649b238-61a3-4259-8a88-a6ea3aff719b", 00:09:35.537 "assigned_rate_limits": { 00:09:35.537 "rw_ios_per_sec": 0, 00:09:35.537 "rw_mbytes_per_sec": 0, 00:09:35.537 "r_mbytes_per_sec": 0, 00:09:35.537 "w_mbytes_per_sec": 0 00:09:35.537 }, 00:09:35.537 "claimed": false, 00:09:35.537 "zoned": false, 
00:09:35.537 "supported_io_types": { 00:09:35.537 "read": true, 00:09:35.537 "write": true, 00:09:35.537 "unmap": false, 00:09:35.537 "flush": false, 00:09:35.537 "reset": true, 00:09:35.537 "nvme_admin": false, 00:09:35.537 "nvme_io": false, 00:09:35.537 "nvme_io_md": false, 00:09:35.537 "write_zeroes": true, 00:09:35.537 "zcopy": false, 00:09:35.537 "get_zone_info": false, 00:09:35.537 "zone_management": false, 00:09:35.537 "zone_append": false, 00:09:35.537 "compare": false, 00:09:35.537 "compare_and_write": false, 00:09:35.537 "abort": false, 00:09:35.537 "seek_hole": false, 00:09:35.537 "seek_data": false, 00:09:35.537 "copy": false, 00:09:35.537 "nvme_iov_md": false 00:09:35.537 }, 00:09:35.537 "memory_domains": [ 00:09:35.537 { 00:09:35.537 "dma_device_id": "system", 00:09:35.537 "dma_device_type": 1 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.537 "dma_device_type": 2 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "dma_device_id": "system", 00:09:35.537 "dma_device_type": 1 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.537 "dma_device_type": 2 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "dma_device_id": "system", 00:09:35.537 "dma_device_type": 1 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.537 "dma_device_type": 2 00:09:35.537 } 00:09:35.537 ], 00:09:35.537 "driver_specific": { 00:09:35.537 "raid": { 00:09:35.537 "uuid": "e649b238-61a3-4259-8a88-a6ea3aff719b", 00:09:35.537 "strip_size_kb": 0, 00:09:35.537 "state": "online", 00:09:35.537 "raid_level": "raid1", 00:09:35.537 "superblock": false, 00:09:35.537 "num_base_bdevs": 3, 00:09:35.537 "num_base_bdevs_discovered": 3, 00:09:35.537 "num_base_bdevs_operational": 3, 00:09:35.537 "base_bdevs_list": [ 00:09:35.537 { 00:09:35.537 "name": "BaseBdev1", 00:09:35.537 "uuid": "044a3bb2-cfc7-4424-af4d-e3b4d0aefac5", 00:09:35.537 "is_configured": true, 00:09:35.537 
"data_offset": 0, 00:09:35.537 "data_size": 65536 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "name": "BaseBdev2", 00:09:35.537 "uuid": "e59fd898-9f3e-4643-9b36-9716fa280a10", 00:09:35.537 "is_configured": true, 00:09:35.537 "data_offset": 0, 00:09:35.537 "data_size": 65536 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "name": "BaseBdev3", 00:09:35.537 "uuid": "e91b2b07-786a-4088-927d-62f461564e23", 00:09:35.537 "is_configured": true, 00:09:35.537 "data_offset": 0, 00:09:35.537 "data_size": 65536 00:09:35.537 } 00:09:35.537 ] 00:09:35.537 } 00:09:35.537 } 00:09:35.537 }' 00:09:35.537 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.797 BaseBdev2 00:09:35.797 BaseBdev3' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.797 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:35.798 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.798 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.798 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.798 [2024-12-09 14:42:13.843812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.057 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.058 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.058 "name": "Existed_Raid", 00:09:36.058 "uuid": "e649b238-61a3-4259-8a88-a6ea3aff719b", 00:09:36.058 "strip_size_kb": 0, 00:09:36.058 "state": "online", 00:09:36.058 "raid_level": "raid1", 00:09:36.058 "superblock": false, 00:09:36.058 "num_base_bdevs": 3, 00:09:36.058 "num_base_bdevs_discovered": 2, 00:09:36.058 "num_base_bdevs_operational": 2, 00:09:36.058 "base_bdevs_list": [ 00:09:36.058 { 00:09:36.058 "name": null, 00:09:36.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.058 "is_configured": false, 00:09:36.058 "data_offset": 0, 00:09:36.058 "data_size": 65536 00:09:36.058 }, 00:09:36.058 { 00:09:36.058 "name": "BaseBdev2", 00:09:36.058 "uuid": "e59fd898-9f3e-4643-9b36-9716fa280a10", 00:09:36.058 "is_configured": true, 00:09:36.058 "data_offset": 0, 00:09:36.058 "data_size": 65536 00:09:36.058 }, 00:09:36.058 { 00:09:36.058 "name": "BaseBdev3", 00:09:36.058 "uuid": "e91b2b07-786a-4088-927d-62f461564e23", 00:09:36.058 "is_configured": true, 00:09:36.058 "data_offset": 0, 00:09:36.058 "data_size": 65536 00:09:36.058 } 00:09:36.058 ] 
00:09:36.058 }' 00:09:36.058 14:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.058 14:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.317 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.576 [2024-12-09 14:42:14.436231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.576 14:42:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.576 [2024-12-09 14:42:14.588345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.576 [2024-12-09 14:42:14.588447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.576 [2024-12-09 14:42:14.685207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.576 [2024-12-09 14:42:14.685318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.576 [2024-12-09 14:42:14.685367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.576 14:42:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.576 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.577 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.577 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.577 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.836 BaseBdev2 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.836 
14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.836 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.837 [ 00:09:36.837 { 00:09:36.837 "name": "BaseBdev2", 00:09:36.837 "aliases": [ 00:09:36.837 "b41843e5-02d5-4935-b227-dea3e25e4ec4" 00:09:36.837 ], 00:09:36.837 "product_name": "Malloc disk", 00:09:36.837 "block_size": 512, 00:09:36.837 "num_blocks": 65536, 00:09:36.837 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:36.837 "assigned_rate_limits": { 00:09:36.837 "rw_ios_per_sec": 0, 00:09:36.837 "rw_mbytes_per_sec": 0, 00:09:36.837 "r_mbytes_per_sec": 0, 00:09:36.837 "w_mbytes_per_sec": 0 00:09:36.837 }, 00:09:36.837 "claimed": false, 00:09:36.837 "zoned": false, 00:09:36.837 "supported_io_types": { 00:09:36.837 "read": true, 00:09:36.837 "write": true, 00:09:36.837 "unmap": true, 00:09:36.837 "flush": true, 00:09:36.837 "reset": true, 00:09:36.837 "nvme_admin": false, 00:09:36.837 "nvme_io": false, 00:09:36.837 "nvme_io_md": false, 00:09:36.837 "write_zeroes": true, 
00:09:36.837 "zcopy": true, 00:09:36.837 "get_zone_info": false, 00:09:36.837 "zone_management": false, 00:09:36.837 "zone_append": false, 00:09:36.837 "compare": false, 00:09:36.837 "compare_and_write": false, 00:09:36.837 "abort": true, 00:09:36.837 "seek_hole": false, 00:09:36.837 "seek_data": false, 00:09:36.837 "copy": true, 00:09:36.837 "nvme_iov_md": false 00:09:36.837 }, 00:09:36.837 "memory_domains": [ 00:09:36.837 { 00:09:36.837 "dma_device_id": "system", 00:09:36.837 "dma_device_type": 1 00:09:36.837 }, 00:09:36.837 { 00:09:36.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.837 "dma_device_type": 2 00:09:36.837 } 00:09:36.837 ], 00:09:36.837 "driver_specific": {} 00:09:36.837 } 00:09:36.837 ] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.837 BaseBdev3 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.837 14:42:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.837 [ 00:09:36.837 { 00:09:36.837 "name": "BaseBdev3", 00:09:36.837 "aliases": [ 00:09:36.837 "cd14c663-8671-4c82-91c0-53aa2af5e3e5" 00:09:36.837 ], 00:09:36.837 "product_name": "Malloc disk", 00:09:36.837 "block_size": 512, 00:09:36.837 "num_blocks": 65536, 00:09:36.837 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:36.837 "assigned_rate_limits": { 00:09:36.837 "rw_ios_per_sec": 0, 00:09:36.837 "rw_mbytes_per_sec": 0, 00:09:36.837 "r_mbytes_per_sec": 0, 00:09:36.837 "w_mbytes_per_sec": 0 00:09:36.837 }, 00:09:36.837 "claimed": false, 00:09:36.837 "zoned": false, 00:09:36.837 "supported_io_types": { 00:09:36.837 "read": true, 00:09:36.837 "write": true, 00:09:36.837 "unmap": true, 00:09:36.837 "flush": true, 00:09:36.837 "reset": true, 00:09:36.837 "nvme_admin": false, 00:09:36.837 "nvme_io": false, 00:09:36.837 "nvme_io_md": false, 00:09:36.837 "write_zeroes": true, 
00:09:36.837 "zcopy": true, 00:09:36.837 "get_zone_info": false, 00:09:36.837 "zone_management": false, 00:09:36.837 "zone_append": false, 00:09:36.837 "compare": false, 00:09:36.837 "compare_and_write": false, 00:09:36.837 "abort": true, 00:09:36.837 "seek_hole": false, 00:09:36.837 "seek_data": false, 00:09:36.837 "copy": true, 00:09:36.837 "nvme_iov_md": false 00:09:36.837 }, 00:09:36.837 "memory_domains": [ 00:09:36.837 { 00:09:36.837 "dma_device_id": "system", 00:09:36.837 "dma_device_type": 1 00:09:36.837 }, 00:09:36.837 { 00:09:36.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.837 "dma_device_type": 2 00:09:36.837 } 00:09:36.837 ], 00:09:36.837 "driver_specific": {} 00:09:36.837 } 00:09:36.837 ] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.837 [2024-12-09 14:42:14.881590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.837 [2024-12-09 14:42:14.881704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.837 [2024-12-09 14:42:14.881751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.837 [2024-12-09 14:42:14.883906] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:36.837 "name": "Existed_Raid", 00:09:36.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.837 "strip_size_kb": 0, 00:09:36.837 "state": "configuring", 00:09:36.837 "raid_level": "raid1", 00:09:36.837 "superblock": false, 00:09:36.837 "num_base_bdevs": 3, 00:09:36.837 "num_base_bdevs_discovered": 2, 00:09:36.837 "num_base_bdevs_operational": 3, 00:09:36.837 "base_bdevs_list": [ 00:09:36.837 { 00:09:36.837 "name": "BaseBdev1", 00:09:36.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.837 "is_configured": false, 00:09:36.837 "data_offset": 0, 00:09:36.837 "data_size": 0 00:09:36.837 }, 00:09:36.837 { 00:09:36.837 "name": "BaseBdev2", 00:09:36.837 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:36.837 "is_configured": true, 00:09:36.837 "data_offset": 0, 00:09:36.837 "data_size": 65536 00:09:36.837 }, 00:09:36.837 { 00:09:36.837 "name": "BaseBdev3", 00:09:36.837 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:36.837 "is_configured": true, 00:09:36.837 "data_offset": 0, 00:09:36.837 "data_size": 65536 00:09:36.837 } 00:09:36.837 ] 00:09:36.837 }' 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.837 14:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.405 [2024-12-09 14:42:15.336873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.405 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.406 "name": "Existed_Raid", 00:09:37.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.406 "strip_size_kb": 0, 00:09:37.406 "state": "configuring", 00:09:37.406 "raid_level": "raid1", 00:09:37.406 "superblock": false, 00:09:37.406 "num_base_bdevs": 3, 
00:09:37.406 "num_base_bdevs_discovered": 1, 00:09:37.406 "num_base_bdevs_operational": 3, 00:09:37.406 "base_bdevs_list": [ 00:09:37.406 { 00:09:37.406 "name": "BaseBdev1", 00:09:37.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.406 "is_configured": false, 00:09:37.406 "data_offset": 0, 00:09:37.406 "data_size": 0 00:09:37.406 }, 00:09:37.406 { 00:09:37.406 "name": null, 00:09:37.406 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:37.406 "is_configured": false, 00:09:37.406 "data_offset": 0, 00:09:37.406 "data_size": 65536 00:09:37.406 }, 00:09:37.406 { 00:09:37.406 "name": "BaseBdev3", 00:09:37.406 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:37.406 "is_configured": true, 00:09:37.406 "data_offset": 0, 00:09:37.406 "data_size": 65536 00:09:37.406 } 00:09:37.406 ] 00:09:37.406 }' 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.406 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.973 14:42:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.973 [2024-12-09 14:42:15.914284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.973 BaseBdev1 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.973 [ 00:09:37.973 { 00:09:37.973 "name": "BaseBdev1", 00:09:37.973 "aliases": [ 00:09:37.973 "a4954978-520e-4858-bbf5-8996272d8545" 00:09:37.973 ], 00:09:37.973 "product_name": "Malloc disk", 
00:09:37.973 "block_size": 512, 00:09:37.973 "num_blocks": 65536, 00:09:37.973 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:37.973 "assigned_rate_limits": { 00:09:37.973 "rw_ios_per_sec": 0, 00:09:37.973 "rw_mbytes_per_sec": 0, 00:09:37.973 "r_mbytes_per_sec": 0, 00:09:37.973 "w_mbytes_per_sec": 0 00:09:37.973 }, 00:09:37.973 "claimed": true, 00:09:37.973 "claim_type": "exclusive_write", 00:09:37.973 "zoned": false, 00:09:37.973 "supported_io_types": { 00:09:37.973 "read": true, 00:09:37.973 "write": true, 00:09:37.973 "unmap": true, 00:09:37.973 "flush": true, 00:09:37.973 "reset": true, 00:09:37.973 "nvme_admin": false, 00:09:37.973 "nvme_io": false, 00:09:37.973 "nvme_io_md": false, 00:09:37.973 "write_zeroes": true, 00:09:37.973 "zcopy": true, 00:09:37.973 "get_zone_info": false, 00:09:37.973 "zone_management": false, 00:09:37.973 "zone_append": false, 00:09:37.973 "compare": false, 00:09:37.973 "compare_and_write": false, 00:09:37.973 "abort": true, 00:09:37.973 "seek_hole": false, 00:09:37.973 "seek_data": false, 00:09:37.973 "copy": true, 00:09:37.973 "nvme_iov_md": false 00:09:37.973 }, 00:09:37.973 "memory_domains": [ 00:09:37.973 { 00:09:37.973 "dma_device_id": "system", 00:09:37.973 "dma_device_type": 1 00:09:37.973 }, 00:09:37.973 { 00:09:37.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.973 "dma_device_type": 2 00:09:37.973 } 00:09:37.973 ], 00:09:37.973 "driver_specific": {} 00:09:37.973 } 00:09:37.973 ] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.973 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.973 "name": "Existed_Raid", 00:09:37.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.974 "strip_size_kb": 0, 00:09:37.974 "state": "configuring", 00:09:37.974 "raid_level": "raid1", 00:09:37.974 "superblock": false, 00:09:37.974 "num_base_bdevs": 3, 00:09:37.974 "num_base_bdevs_discovered": 2, 00:09:37.974 "num_base_bdevs_operational": 3, 00:09:37.974 "base_bdevs_list": [ 00:09:37.974 { 00:09:37.974 "name": "BaseBdev1", 00:09:37.974 "uuid": 
"a4954978-520e-4858-bbf5-8996272d8545", 00:09:37.974 "is_configured": true, 00:09:37.974 "data_offset": 0, 00:09:37.974 "data_size": 65536 00:09:37.974 }, 00:09:37.974 { 00:09:37.974 "name": null, 00:09:37.974 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:37.974 "is_configured": false, 00:09:37.974 "data_offset": 0, 00:09:37.974 "data_size": 65536 00:09:37.974 }, 00:09:37.974 { 00:09:37.974 "name": "BaseBdev3", 00:09:37.974 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:37.974 "is_configured": true, 00:09:37.974 "data_offset": 0, 00:09:37.974 "data_size": 65536 00:09:37.974 } 00:09:37.974 ] 00:09:37.974 }' 00:09:37.974 14:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.974 14:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.233 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.233 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.233 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.233 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.233 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.493 [2024-12-09 14:42:16.377565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.493 14:42:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.493 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.493 "name": "Existed_Raid", 00:09:38.493 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:38.493 "strip_size_kb": 0, 00:09:38.493 "state": "configuring", 00:09:38.493 "raid_level": "raid1", 00:09:38.493 "superblock": false, 00:09:38.493 "num_base_bdevs": 3, 00:09:38.493 "num_base_bdevs_discovered": 1, 00:09:38.493 "num_base_bdevs_operational": 3, 00:09:38.493 "base_bdevs_list": [ 00:09:38.494 { 00:09:38.494 "name": "BaseBdev1", 00:09:38.494 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:38.494 "is_configured": true, 00:09:38.494 "data_offset": 0, 00:09:38.494 "data_size": 65536 00:09:38.494 }, 00:09:38.494 { 00:09:38.494 "name": null, 00:09:38.494 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:38.494 "is_configured": false, 00:09:38.494 "data_offset": 0, 00:09:38.494 "data_size": 65536 00:09:38.494 }, 00:09:38.494 { 00:09:38.494 "name": null, 00:09:38.494 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:38.494 "is_configured": false, 00:09:38.494 "data_offset": 0, 00:09:38.494 "data_size": 65536 00:09:38.494 } 00:09:38.494 ] 00:09:38.494 }' 00:09:38.494 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.494 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.761 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.761 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.761 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.761 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.762 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.034 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.035 [2024-12-09 14:42:16.908702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.035 "name": "Existed_Raid", 00:09:39.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.035 "strip_size_kb": 0, 00:09:39.035 "state": "configuring", 00:09:39.035 "raid_level": "raid1", 00:09:39.035 "superblock": false, 00:09:39.035 "num_base_bdevs": 3, 00:09:39.035 "num_base_bdevs_discovered": 2, 00:09:39.035 "num_base_bdevs_operational": 3, 00:09:39.035 "base_bdevs_list": [ 00:09:39.035 { 00:09:39.035 "name": "BaseBdev1", 00:09:39.035 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:39.035 "is_configured": true, 00:09:39.035 "data_offset": 0, 00:09:39.035 "data_size": 65536 00:09:39.035 }, 00:09:39.035 { 00:09:39.035 "name": null, 00:09:39.035 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:39.035 "is_configured": false, 00:09:39.035 "data_offset": 0, 00:09:39.035 "data_size": 65536 00:09:39.035 }, 00:09:39.035 { 00:09:39.035 "name": "BaseBdev3", 00:09:39.035 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:39.035 "is_configured": true, 00:09:39.035 "data_offset": 0, 00:09:39.035 "data_size": 65536 00:09:39.035 } 00:09:39.035 ] 00:09:39.035 }' 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.035 14:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.293 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.293 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.293 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:39.294 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.294 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.553 [2024-12-09 14:42:17.427831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.553 "name": "Existed_Raid", 00:09:39.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.553 "strip_size_kb": 0, 00:09:39.553 "state": "configuring", 00:09:39.553 "raid_level": "raid1", 00:09:39.553 "superblock": false, 00:09:39.553 "num_base_bdevs": 3, 00:09:39.553 "num_base_bdevs_discovered": 1, 00:09:39.553 "num_base_bdevs_operational": 3, 00:09:39.553 "base_bdevs_list": [ 00:09:39.553 { 00:09:39.553 "name": null, 00:09:39.553 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:39.553 "is_configured": false, 00:09:39.553 "data_offset": 0, 00:09:39.553 "data_size": 65536 00:09:39.553 }, 00:09:39.553 { 00:09:39.553 "name": null, 00:09:39.553 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:39.553 "is_configured": false, 00:09:39.553 "data_offset": 0, 00:09:39.553 "data_size": 65536 00:09:39.553 }, 00:09:39.553 { 00:09:39.553 "name": "BaseBdev3", 00:09:39.553 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:39.553 "is_configured": true, 00:09:39.553 "data_offset": 0, 00:09:39.553 "data_size": 65536 00:09:39.553 } 00:09:39.553 ] 00:09:39.553 }' 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.553 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:40.120 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.120 14:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.120 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.120 14:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.120 [2024-12-09 14:42:18.035225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.120 "name": "Existed_Raid", 00:09:40.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.120 "strip_size_kb": 0, 00:09:40.120 "state": "configuring", 00:09:40.120 "raid_level": "raid1", 00:09:40.120 "superblock": false, 00:09:40.120 "num_base_bdevs": 3, 00:09:40.120 "num_base_bdevs_discovered": 2, 00:09:40.120 "num_base_bdevs_operational": 3, 00:09:40.120 "base_bdevs_list": [ 00:09:40.120 { 00:09:40.120 "name": null, 00:09:40.120 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:40.120 "is_configured": false, 00:09:40.120 "data_offset": 0, 00:09:40.120 "data_size": 65536 00:09:40.120 }, 00:09:40.120 { 00:09:40.120 "name": "BaseBdev2", 00:09:40.120 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:40.120 "is_configured": true, 00:09:40.120 "data_offset": 0, 00:09:40.120 "data_size": 65536 00:09:40.120 }, 00:09:40.120 { 00:09:40.120 "name": "BaseBdev3", 
00:09:40.120 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:40.120 "is_configured": true, 00:09:40.120 "data_offset": 0, 00:09:40.120 "data_size": 65536 00:09:40.120 } 00:09:40.120 ] 00:09:40.120 }' 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.120 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.379 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.379 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.379 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.379 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a4954978-520e-4858-bbf5-8996272d8545 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.638 [2024-12-09 14:42:18.639951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.638 [2024-12-09 14:42:18.640082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.638 [2024-12-09 14:42:18.640112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:40.638 [2024-12-09 14:42:18.640455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.638 [2024-12-09 14:42:18.640702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.638 [2024-12-09 14:42:18.640754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.638 [2024-12-09 14:42:18.641101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.638 NewBaseBdev 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.638 
14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.638 [ 00:09:40.638 { 00:09:40.638 "name": "NewBaseBdev", 00:09:40.638 "aliases": [ 00:09:40.638 "a4954978-520e-4858-bbf5-8996272d8545" 00:09:40.638 ], 00:09:40.638 "product_name": "Malloc disk", 00:09:40.638 "block_size": 512, 00:09:40.638 "num_blocks": 65536, 00:09:40.638 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:40.638 "assigned_rate_limits": { 00:09:40.638 "rw_ios_per_sec": 0, 00:09:40.638 "rw_mbytes_per_sec": 0, 00:09:40.638 "r_mbytes_per_sec": 0, 00:09:40.638 "w_mbytes_per_sec": 0 00:09:40.638 }, 00:09:40.638 "claimed": true, 00:09:40.638 "claim_type": "exclusive_write", 00:09:40.638 "zoned": false, 00:09:40.638 "supported_io_types": { 00:09:40.638 "read": true, 00:09:40.638 "write": true, 00:09:40.638 "unmap": true, 00:09:40.638 "flush": true, 00:09:40.638 "reset": true, 00:09:40.638 "nvme_admin": false, 00:09:40.638 "nvme_io": false, 00:09:40.638 "nvme_io_md": false, 00:09:40.638 "write_zeroes": true, 00:09:40.638 "zcopy": true, 00:09:40.638 "get_zone_info": false, 00:09:40.638 "zone_management": false, 00:09:40.638 "zone_append": false, 00:09:40.638 "compare": false, 00:09:40.638 "compare_and_write": false, 00:09:40.638 "abort": true, 00:09:40.638 "seek_hole": false, 00:09:40.638 "seek_data": false, 00:09:40.638 "copy": true, 00:09:40.638 "nvme_iov_md": false 00:09:40.638 }, 00:09:40.638 "memory_domains": [ 00:09:40.638 { 00:09:40.638 "dma_device_id": "system", 00:09:40.638 "dma_device_type": 1 
00:09:40.638 }, 00:09:40.638 { 00:09:40.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.638 "dma_device_type": 2 00:09:40.638 } 00:09:40.638 ], 00:09:40.638 "driver_specific": {} 00:09:40.638 } 00:09:40.638 ] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.638 14:42:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.639 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.639 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.639 "name": "Existed_Raid", 00:09:40.639 "uuid": "9e99e8e3-8fd6-4a7c-a868-cce600460ada", 00:09:40.639 "strip_size_kb": 0, 00:09:40.639 "state": "online", 00:09:40.639 "raid_level": "raid1", 00:09:40.639 "superblock": false, 00:09:40.639 "num_base_bdevs": 3, 00:09:40.639 "num_base_bdevs_discovered": 3, 00:09:40.639 "num_base_bdevs_operational": 3, 00:09:40.639 "base_bdevs_list": [ 00:09:40.639 { 00:09:40.639 "name": "NewBaseBdev", 00:09:40.639 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:40.639 "is_configured": true, 00:09:40.639 "data_offset": 0, 00:09:40.639 "data_size": 65536 00:09:40.639 }, 00:09:40.639 { 00:09:40.639 "name": "BaseBdev2", 00:09:40.639 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:40.639 "is_configured": true, 00:09:40.639 "data_offset": 0, 00:09:40.639 "data_size": 65536 00:09:40.639 }, 00:09:40.639 { 00:09:40.639 "name": "BaseBdev3", 00:09:40.639 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:40.639 "is_configured": true, 00:09:40.639 "data_offset": 0, 00:09:40.639 "data_size": 65536 00:09:40.639 } 00:09:40.639 ] 00:09:40.639 }' 00:09:40.639 14:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.639 14:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.208 [2024-12-09 14:42:19.151471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.208 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.208 "name": "Existed_Raid", 00:09:41.208 "aliases": [ 00:09:41.208 "9e99e8e3-8fd6-4a7c-a868-cce600460ada" 00:09:41.208 ], 00:09:41.208 "product_name": "Raid Volume", 00:09:41.208 "block_size": 512, 00:09:41.208 "num_blocks": 65536, 00:09:41.208 "uuid": "9e99e8e3-8fd6-4a7c-a868-cce600460ada", 00:09:41.208 "assigned_rate_limits": { 00:09:41.208 "rw_ios_per_sec": 0, 00:09:41.208 "rw_mbytes_per_sec": 0, 00:09:41.208 "r_mbytes_per_sec": 0, 00:09:41.208 "w_mbytes_per_sec": 0 00:09:41.208 }, 00:09:41.208 "claimed": false, 00:09:41.208 "zoned": false, 00:09:41.208 "supported_io_types": { 00:09:41.208 "read": true, 00:09:41.208 "write": true, 00:09:41.208 "unmap": false, 00:09:41.208 "flush": false, 00:09:41.208 "reset": true, 00:09:41.208 "nvme_admin": false, 00:09:41.208 "nvme_io": false, 00:09:41.208 "nvme_io_md": false, 00:09:41.208 "write_zeroes": true, 00:09:41.208 "zcopy": false, 00:09:41.208 "get_zone_info": false, 00:09:41.208 "zone_management": false, 00:09:41.208 
"zone_append": false, 00:09:41.208 "compare": false, 00:09:41.208 "compare_and_write": false, 00:09:41.208 "abort": false, 00:09:41.208 "seek_hole": false, 00:09:41.208 "seek_data": false, 00:09:41.208 "copy": false, 00:09:41.208 "nvme_iov_md": false 00:09:41.208 }, 00:09:41.208 "memory_domains": [ 00:09:41.208 { 00:09:41.208 "dma_device_id": "system", 00:09:41.208 "dma_device_type": 1 00:09:41.208 }, 00:09:41.208 { 00:09:41.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.208 "dma_device_type": 2 00:09:41.208 }, 00:09:41.208 { 00:09:41.208 "dma_device_id": "system", 00:09:41.208 "dma_device_type": 1 00:09:41.208 }, 00:09:41.208 { 00:09:41.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.208 "dma_device_type": 2 00:09:41.208 }, 00:09:41.208 { 00:09:41.208 "dma_device_id": "system", 00:09:41.208 "dma_device_type": 1 00:09:41.208 }, 00:09:41.208 { 00:09:41.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.208 "dma_device_type": 2 00:09:41.208 } 00:09:41.208 ], 00:09:41.208 "driver_specific": { 00:09:41.208 "raid": { 00:09:41.208 "uuid": "9e99e8e3-8fd6-4a7c-a868-cce600460ada", 00:09:41.208 "strip_size_kb": 0, 00:09:41.208 "state": "online", 00:09:41.208 "raid_level": "raid1", 00:09:41.208 "superblock": false, 00:09:41.208 "num_base_bdevs": 3, 00:09:41.208 "num_base_bdevs_discovered": 3, 00:09:41.208 "num_base_bdevs_operational": 3, 00:09:41.208 "base_bdevs_list": [ 00:09:41.208 { 00:09:41.209 "name": "NewBaseBdev", 00:09:41.209 "uuid": "a4954978-520e-4858-bbf5-8996272d8545", 00:09:41.209 "is_configured": true, 00:09:41.209 "data_offset": 0, 00:09:41.209 "data_size": 65536 00:09:41.209 }, 00:09:41.209 { 00:09:41.209 "name": "BaseBdev2", 00:09:41.209 "uuid": "b41843e5-02d5-4935-b227-dea3e25e4ec4", 00:09:41.209 "is_configured": true, 00:09:41.209 "data_offset": 0, 00:09:41.209 "data_size": 65536 00:09:41.209 }, 00:09:41.209 { 00:09:41.209 "name": "BaseBdev3", 00:09:41.209 "uuid": "cd14c663-8671-4c82-91c0-53aa2af5e3e5", 00:09:41.209 "is_configured": true, 
00:09:41.209 "data_offset": 0, 00:09:41.209 "data_size": 65536 00:09:41.209 } 00:09:41.209 ] 00:09:41.209 } 00:09:41.209 } 00:09:41.209 }' 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.209 BaseBdev2 00:09:41.209 BaseBdev3' 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.209 14:42:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.209 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.468 [2024-12-09 14:42:19.414767] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:41.468 [2024-12-09 14:42:19.414805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.468 [2024-12-09 14:42:19.414899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.468 [2024-12-09 14:42:19.415215] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.468 [2024-12-09 14:42:19.415227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 68670 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 68670 ']' 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 68670 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68670 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68670' 00:09:41.468 killing process with pid 68670 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 68670 00:09:41.468 [2024-12-09 14:42:19.449220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:41.468 14:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 68670 00:09:41.728 [2024-12-09 14:42:19.764120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.106 ************************************ 00:09:43.106 END TEST raid_state_function_test 00:09:43.106 00:09:43.106 real 0m10.854s 00:09:43.106 user 0m17.324s 00:09:43.106 sys 0m1.800s 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.106 ************************************ 00:09:43.106 14:42:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:43.106 14:42:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:43.106 14:42:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.106 14:42:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.106 ************************************ 00:09:43.106 START TEST raid_state_function_test_sb 00:09:43.106 ************************************ 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:43.106 14:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:43.106 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:43.106 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.106 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69297 00:09:43.106 Process raid pid: 69297 00:09:43.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69297' 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69297 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69297 ']' 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.107 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.107 [2024-12-09 14:42:21.085007] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:09:43.107 [2024-12-09 14:42:21.085226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.366 [2024-12-09 14:42:21.262096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.366 [2024-12-09 14:42:21.379871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.626 [2024-12-09 14:42:21.594249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.626 [2024-12-09 14:42:21.594382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.886 [2024-12-09 14:42:21.942966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.886 [2024-12-09 14:42:21.943101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.886 [2024-12-09 14:42:21.943145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.886 [2024-12-09 14:42:21.943174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.886 [2024-12-09 14:42:21.943195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:43.886 [2024-12-09 14:42:21.943220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.886 "name": "Existed_Raid", 00:09:43.886 "uuid": "114c360a-67a0-493b-bb0f-6c3017cbdae2", 00:09:43.886 "strip_size_kb": 0, 00:09:43.886 "state": "configuring", 00:09:43.886 "raid_level": "raid1", 00:09:43.886 "superblock": true, 00:09:43.886 "num_base_bdevs": 3, 00:09:43.886 "num_base_bdevs_discovered": 0, 00:09:43.886 "num_base_bdevs_operational": 3, 00:09:43.886 "base_bdevs_list": [ 00:09:43.886 { 00:09:43.886 "name": "BaseBdev1", 00:09:43.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.886 "is_configured": false, 00:09:43.886 "data_offset": 0, 00:09:43.886 "data_size": 0 00:09:43.886 }, 00:09:43.886 { 00:09:43.886 "name": "BaseBdev2", 00:09:43.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.886 "is_configured": false, 00:09:43.886 "data_offset": 0, 00:09:43.886 "data_size": 0 00:09:43.886 }, 00:09:43.886 { 00:09:43.886 "name": "BaseBdev3", 00:09:43.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.886 "is_configured": false, 00:09:43.886 "data_offset": 0, 00:09:43.886 "data_size": 0 00:09:43.886 } 00:09:43.886 ] 00:09:43.886 }' 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.886 14:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.455 [2024-12-09 14:42:22.406191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.455 [2024-12-09 14:42:22.406231] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.455 [2024-12-09 14:42:22.414169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.455 [2024-12-09 14:42:22.414216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.455 [2024-12-09 14:42:22.414225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.455 [2024-12-09 14:42:22.414235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.455 [2024-12-09 14:42:22.414241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.455 [2024-12-09 14:42:22.414251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.455 [2024-12-09 14:42:22.460459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.455 BaseBdev1 
00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.455 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.456 [ 00:09:44.456 { 00:09:44.456 "name": "BaseBdev1", 00:09:44.456 "aliases": [ 00:09:44.456 "d7b7d1a7-fdc6-4ef6-9d07-7c00b05cf9bc" 00:09:44.456 ], 00:09:44.456 "product_name": "Malloc disk", 00:09:44.456 "block_size": 512, 00:09:44.456 "num_blocks": 65536, 00:09:44.456 "uuid": "d7b7d1a7-fdc6-4ef6-9d07-7c00b05cf9bc", 00:09:44.456 "assigned_rate_limits": { 00:09:44.456 
"rw_ios_per_sec": 0, 00:09:44.456 "rw_mbytes_per_sec": 0, 00:09:44.456 "r_mbytes_per_sec": 0, 00:09:44.456 "w_mbytes_per_sec": 0 00:09:44.456 }, 00:09:44.456 "claimed": true, 00:09:44.456 "claim_type": "exclusive_write", 00:09:44.456 "zoned": false, 00:09:44.456 "supported_io_types": { 00:09:44.456 "read": true, 00:09:44.456 "write": true, 00:09:44.456 "unmap": true, 00:09:44.456 "flush": true, 00:09:44.456 "reset": true, 00:09:44.456 "nvme_admin": false, 00:09:44.456 "nvme_io": false, 00:09:44.456 "nvme_io_md": false, 00:09:44.456 "write_zeroes": true, 00:09:44.456 "zcopy": true, 00:09:44.456 "get_zone_info": false, 00:09:44.456 "zone_management": false, 00:09:44.456 "zone_append": false, 00:09:44.456 "compare": false, 00:09:44.456 "compare_and_write": false, 00:09:44.456 "abort": true, 00:09:44.456 "seek_hole": false, 00:09:44.456 "seek_data": false, 00:09:44.456 "copy": true, 00:09:44.456 "nvme_iov_md": false 00:09:44.456 }, 00:09:44.456 "memory_domains": [ 00:09:44.456 { 00:09:44.456 "dma_device_id": "system", 00:09:44.456 "dma_device_type": 1 00:09:44.456 }, 00:09:44.456 { 00:09:44.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.456 "dma_device_type": 2 00:09:44.456 } 00:09:44.456 ], 00:09:44.456 "driver_specific": {} 00:09:44.456 } 00:09:44.456 ] 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.456 "name": "Existed_Raid", 00:09:44.456 "uuid": "502a6387-41b9-40b0-9b30-7930831b9733", 00:09:44.456 "strip_size_kb": 0, 00:09:44.456 "state": "configuring", 00:09:44.456 "raid_level": "raid1", 00:09:44.456 "superblock": true, 00:09:44.456 "num_base_bdevs": 3, 00:09:44.456 "num_base_bdevs_discovered": 1, 00:09:44.456 "num_base_bdevs_operational": 3, 00:09:44.456 "base_bdevs_list": [ 00:09:44.456 { 00:09:44.456 "name": "BaseBdev1", 00:09:44.456 "uuid": "d7b7d1a7-fdc6-4ef6-9d07-7c00b05cf9bc", 00:09:44.456 "is_configured": true, 00:09:44.456 "data_offset": 2048, 00:09:44.456 "data_size": 63488 
00:09:44.456 }, 00:09:44.456 { 00:09:44.456 "name": "BaseBdev2", 00:09:44.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.456 "is_configured": false, 00:09:44.456 "data_offset": 0, 00:09:44.456 "data_size": 0 00:09:44.456 }, 00:09:44.456 { 00:09:44.456 "name": "BaseBdev3", 00:09:44.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.456 "is_configured": false, 00:09:44.456 "data_offset": 0, 00:09:44.456 "data_size": 0 00:09:44.456 } 00:09:44.456 ] 00:09:44.456 }' 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.456 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.024 [2024-12-09 14:42:22.947711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.024 [2024-12-09 14:42:22.947855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.024 [2024-12-09 14:42:22.959745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.024 [2024-12-09 14:42:22.961679] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.024 [2024-12-09 14:42:22.961724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.024 [2024-12-09 14:42:22.961734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.024 [2024-12-09 14:42:22.961743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.024 14:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.024 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.024 "name": "Existed_Raid", 00:09:45.024 "uuid": "221d73e4-c8ee-4548-8d9a-e640934d43da", 00:09:45.024 "strip_size_kb": 0, 00:09:45.024 "state": "configuring", 00:09:45.024 "raid_level": "raid1", 00:09:45.024 "superblock": true, 00:09:45.024 "num_base_bdevs": 3, 00:09:45.024 "num_base_bdevs_discovered": 1, 00:09:45.024 "num_base_bdevs_operational": 3, 00:09:45.024 "base_bdevs_list": [ 00:09:45.024 { 00:09:45.024 "name": "BaseBdev1", 00:09:45.024 "uuid": "d7b7d1a7-fdc6-4ef6-9d07-7c00b05cf9bc", 00:09:45.024 "is_configured": true, 00:09:45.024 "data_offset": 2048, 00:09:45.024 "data_size": 63488 00:09:45.024 }, 00:09:45.024 { 00:09:45.024 "name": "BaseBdev2", 00:09:45.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.024 "is_configured": false, 00:09:45.024 "data_offset": 0, 00:09:45.024 "data_size": 0 00:09:45.024 }, 00:09:45.024 { 00:09:45.024 "name": "BaseBdev3", 00:09:45.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.024 "is_configured": false, 00:09:45.024 "data_offset": 0, 00:09:45.024 "data_size": 0 00:09:45.024 } 00:09:45.024 ] 00:09:45.024 }' 00:09:45.024 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.024 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.594 [2024-12-09 14:42:23.455411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.594 BaseBdev2 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:45.594 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.594 [ 00:09:45.594 { 00:09:45.594 "name": "BaseBdev2", 00:09:45.594 "aliases": [ 00:09:45.594 "b1e8dbd0-a4bf-499e-ac51-5f1d8df09202" 00:09:45.594 ], 00:09:45.594 "product_name": "Malloc disk", 00:09:45.594 "block_size": 512, 00:09:45.594 "num_blocks": 65536, 00:09:45.594 "uuid": "b1e8dbd0-a4bf-499e-ac51-5f1d8df09202", 00:09:45.594 "assigned_rate_limits": { 00:09:45.594 "rw_ios_per_sec": 0, 00:09:45.594 "rw_mbytes_per_sec": 0, 00:09:45.594 "r_mbytes_per_sec": 0, 00:09:45.594 "w_mbytes_per_sec": 0 00:09:45.594 }, 00:09:45.594 "claimed": true, 00:09:45.594 "claim_type": "exclusive_write", 00:09:45.594 "zoned": false, 00:09:45.594 "supported_io_types": { 00:09:45.594 "read": true, 00:09:45.594 "write": true, 00:09:45.594 "unmap": true, 00:09:45.594 "flush": true, 00:09:45.594 "reset": true, 00:09:45.594 "nvme_admin": false, 00:09:45.594 "nvme_io": false, 00:09:45.594 "nvme_io_md": false, 00:09:45.594 "write_zeroes": true, 00:09:45.594 "zcopy": true, 00:09:45.594 "get_zone_info": false, 00:09:45.594 "zone_management": false, 00:09:45.594 "zone_append": false, 00:09:45.594 "compare": false, 00:09:45.594 "compare_and_write": false, 00:09:45.594 "abort": true, 00:09:45.594 "seek_hole": false, 00:09:45.594 "seek_data": false, 00:09:45.594 "copy": true, 00:09:45.594 "nvme_iov_md": false 00:09:45.594 }, 00:09:45.594 "memory_domains": [ 00:09:45.594 { 00:09:45.594 "dma_device_id": "system", 00:09:45.594 "dma_device_type": 1 00:09:45.594 }, 00:09:45.594 { 00:09:45.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.594 "dma_device_type": 2 00:09:45.595 } 00:09:45.595 ], 00:09:45.595 "driver_specific": {} 00:09:45.595 } 00:09:45.595 ] 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.595 
14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.595 "name": "Existed_Raid", 00:09:45.595 "uuid": "221d73e4-c8ee-4548-8d9a-e640934d43da", 00:09:45.595 "strip_size_kb": 0, 00:09:45.595 "state": "configuring", 00:09:45.595 "raid_level": "raid1", 00:09:45.595 "superblock": true, 00:09:45.595 "num_base_bdevs": 3, 00:09:45.595 "num_base_bdevs_discovered": 2, 00:09:45.595 "num_base_bdevs_operational": 3, 00:09:45.595 "base_bdevs_list": [ 00:09:45.595 { 00:09:45.595 "name": "BaseBdev1", 00:09:45.595 "uuid": "d7b7d1a7-fdc6-4ef6-9d07-7c00b05cf9bc", 00:09:45.595 "is_configured": true, 00:09:45.595 "data_offset": 2048, 00:09:45.595 "data_size": 63488 00:09:45.595 }, 00:09:45.595 { 00:09:45.595 "name": "BaseBdev2", 00:09:45.595 "uuid": "b1e8dbd0-a4bf-499e-ac51-5f1d8df09202", 00:09:45.595 "is_configured": true, 00:09:45.595 "data_offset": 2048, 00:09:45.595 "data_size": 63488 00:09:45.595 }, 00:09:45.595 { 00:09:45.595 "name": "BaseBdev3", 00:09:45.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.595 "is_configured": false, 00:09:45.595 "data_offset": 0, 00:09:45.595 "data_size": 0 00:09:45.595 } 00:09:45.595 ] 00:09:45.595 }' 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.595 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.861 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.861 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.861 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.130 [2024-12-09 14:42:23.981929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.130 [2024-12-09 14:42:23.982328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:46.130 [2024-12-09 14:42:23.982357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.130 [2024-12-09 14:42:23.982707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.130 [2024-12-09 14:42:23.982904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.130 [2024-12-09 14:42:23.982915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:46.130 BaseBdev3 00:09:46.130 [2024-12-09 14:42:23.983076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.130 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.130 14:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:46.130 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:46.130 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.130 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.130 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.131 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.131 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.131 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.131 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.131 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.131 14:42:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.131 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.131 14:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.131 [ 00:09:46.131 { 00:09:46.131 "name": "BaseBdev3", 00:09:46.131 "aliases": [ 00:09:46.131 "8ba11486-5261-405d-9f8f-fd47278dd917" 00:09:46.131 ], 00:09:46.131 "product_name": "Malloc disk", 00:09:46.131 "block_size": 512, 00:09:46.131 "num_blocks": 65536, 00:09:46.131 "uuid": "8ba11486-5261-405d-9f8f-fd47278dd917", 00:09:46.131 "assigned_rate_limits": { 00:09:46.131 "rw_ios_per_sec": 0, 00:09:46.131 "rw_mbytes_per_sec": 0, 00:09:46.131 "r_mbytes_per_sec": 0, 00:09:46.131 "w_mbytes_per_sec": 0 00:09:46.131 }, 00:09:46.131 "claimed": true, 00:09:46.131 "claim_type": "exclusive_write", 00:09:46.131 "zoned": false, 00:09:46.131 "supported_io_types": { 00:09:46.131 "read": true, 00:09:46.131 "write": true, 00:09:46.131 "unmap": true, 00:09:46.131 "flush": true, 00:09:46.131 "reset": true, 00:09:46.131 "nvme_admin": false, 00:09:46.131 "nvme_io": false, 00:09:46.131 "nvme_io_md": false, 00:09:46.131 "write_zeroes": true, 00:09:46.131 "zcopy": true, 00:09:46.131 "get_zone_info": false, 00:09:46.131 "zone_management": false, 00:09:46.131 "zone_append": false, 00:09:46.131 "compare": false, 00:09:46.131 "compare_and_write": false, 00:09:46.131 "abort": true, 00:09:46.131 "seek_hole": false, 00:09:46.131 "seek_data": false, 00:09:46.131 "copy": true, 00:09:46.131 "nvme_iov_md": false 00:09:46.131 }, 00:09:46.131 "memory_domains": [ 00:09:46.131 { 00:09:46.131 "dma_device_id": "system", 00:09:46.131 "dma_device_type": 1 00:09:46.131 }, 00:09:46.131 { 00:09:46.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.131 "dma_device_type": 2 00:09:46.131 } 00:09:46.131 ], 00:09:46.131 "driver_specific": {} 00:09:46.131 } 00:09:46.131 ] 
00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.131 
14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.131 "name": "Existed_Raid", 00:09:46.131 "uuid": "221d73e4-c8ee-4548-8d9a-e640934d43da", 00:09:46.131 "strip_size_kb": 0, 00:09:46.131 "state": "online", 00:09:46.131 "raid_level": "raid1", 00:09:46.131 "superblock": true, 00:09:46.131 "num_base_bdevs": 3, 00:09:46.131 "num_base_bdevs_discovered": 3, 00:09:46.131 "num_base_bdevs_operational": 3, 00:09:46.131 "base_bdevs_list": [ 00:09:46.131 { 00:09:46.131 "name": "BaseBdev1", 00:09:46.131 "uuid": "d7b7d1a7-fdc6-4ef6-9d07-7c00b05cf9bc", 00:09:46.131 "is_configured": true, 00:09:46.131 "data_offset": 2048, 00:09:46.131 "data_size": 63488 00:09:46.131 }, 00:09:46.131 { 00:09:46.131 "name": "BaseBdev2", 00:09:46.131 "uuid": "b1e8dbd0-a4bf-499e-ac51-5f1d8df09202", 00:09:46.131 "is_configured": true, 00:09:46.131 "data_offset": 2048, 00:09:46.131 "data_size": 63488 00:09:46.131 }, 00:09:46.131 { 00:09:46.131 "name": "BaseBdev3", 00:09:46.131 "uuid": "8ba11486-5261-405d-9f8f-fd47278dd917", 00:09:46.131 "is_configured": true, 00:09:46.131 "data_offset": 2048, 00:09:46.131 "data_size": 63488 00:09:46.131 } 00:09:46.131 ] 00:09:46.131 }' 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.131 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.391 [2024-12-09 14:42:24.465524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.391 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.391 "name": "Existed_Raid", 00:09:46.391 "aliases": [ 00:09:46.391 "221d73e4-c8ee-4548-8d9a-e640934d43da" 00:09:46.391 ], 00:09:46.391 "product_name": "Raid Volume", 00:09:46.391 "block_size": 512, 00:09:46.391 "num_blocks": 63488, 00:09:46.391 "uuid": "221d73e4-c8ee-4548-8d9a-e640934d43da", 00:09:46.391 "assigned_rate_limits": { 00:09:46.391 "rw_ios_per_sec": 0, 00:09:46.391 "rw_mbytes_per_sec": 0, 00:09:46.391 "r_mbytes_per_sec": 0, 00:09:46.391 "w_mbytes_per_sec": 0 00:09:46.391 }, 00:09:46.391 "claimed": false, 00:09:46.391 "zoned": false, 00:09:46.391 "supported_io_types": { 00:09:46.391 "read": true, 00:09:46.391 "write": true, 00:09:46.391 "unmap": false, 00:09:46.391 "flush": false, 00:09:46.391 "reset": true, 00:09:46.391 "nvme_admin": false, 00:09:46.391 "nvme_io": false, 00:09:46.391 "nvme_io_md": false, 00:09:46.391 "write_zeroes": true, 
00:09:46.391 "zcopy": false, 00:09:46.391 "get_zone_info": false, 00:09:46.391 "zone_management": false, 00:09:46.391 "zone_append": false, 00:09:46.391 "compare": false, 00:09:46.391 "compare_and_write": false, 00:09:46.391 "abort": false, 00:09:46.391 "seek_hole": false, 00:09:46.391 "seek_data": false, 00:09:46.391 "copy": false, 00:09:46.391 "nvme_iov_md": false 00:09:46.391 }, 00:09:46.391 "memory_domains": [ 00:09:46.391 { 00:09:46.391 "dma_device_id": "system", 00:09:46.391 "dma_device_type": 1 00:09:46.391 }, 00:09:46.391 { 00:09:46.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.391 "dma_device_type": 2 00:09:46.392 }, 00:09:46.392 { 00:09:46.392 "dma_device_id": "system", 00:09:46.392 "dma_device_type": 1 00:09:46.392 }, 00:09:46.392 { 00:09:46.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.392 "dma_device_type": 2 00:09:46.392 }, 00:09:46.392 { 00:09:46.392 "dma_device_id": "system", 00:09:46.392 "dma_device_type": 1 00:09:46.392 }, 00:09:46.392 { 00:09:46.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.392 "dma_device_type": 2 00:09:46.392 } 00:09:46.392 ], 00:09:46.392 "driver_specific": { 00:09:46.392 "raid": { 00:09:46.392 "uuid": "221d73e4-c8ee-4548-8d9a-e640934d43da", 00:09:46.392 "strip_size_kb": 0, 00:09:46.392 "state": "online", 00:09:46.392 "raid_level": "raid1", 00:09:46.392 "superblock": true, 00:09:46.392 "num_base_bdevs": 3, 00:09:46.392 "num_base_bdevs_discovered": 3, 00:09:46.392 "num_base_bdevs_operational": 3, 00:09:46.392 "base_bdevs_list": [ 00:09:46.392 { 00:09:46.392 "name": "BaseBdev1", 00:09:46.392 "uuid": "d7b7d1a7-fdc6-4ef6-9d07-7c00b05cf9bc", 00:09:46.392 "is_configured": true, 00:09:46.392 "data_offset": 2048, 00:09:46.392 "data_size": 63488 00:09:46.392 }, 00:09:46.392 { 00:09:46.392 "name": "BaseBdev2", 00:09:46.392 "uuid": "b1e8dbd0-a4bf-499e-ac51-5f1d8df09202", 00:09:46.392 "is_configured": true, 00:09:46.392 "data_offset": 2048, 00:09:46.392 "data_size": 63488 00:09:46.392 }, 00:09:46.392 { 
00:09:46.392 "name": "BaseBdev3", 00:09:46.392 "uuid": "8ba11486-5261-405d-9f8f-fd47278dd917", 00:09:46.392 "is_configured": true, 00:09:46.392 "data_offset": 2048, 00:09:46.392 "data_size": 63488 00:09:46.392 } 00:09:46.392 ] 00:09:46.392 } 00:09:46.392 } 00:09:46.392 }' 00:09:46.392 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.652 BaseBdev2 00:09:46.652 BaseBdev3' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.652 14:42:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.652 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.652 [2024-12-09 14:42:24.736792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.912 
14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.912 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.912 "name": "Existed_Raid", 00:09:46.912 "uuid": "221d73e4-c8ee-4548-8d9a-e640934d43da", 00:09:46.912 "strip_size_kb": 0, 00:09:46.912 "state": "online", 00:09:46.912 "raid_level": "raid1", 00:09:46.912 "superblock": true, 00:09:46.912 "num_base_bdevs": 3, 00:09:46.912 "num_base_bdevs_discovered": 2, 00:09:46.912 "num_base_bdevs_operational": 2, 00:09:46.913 "base_bdevs_list": [ 00:09:46.913 { 00:09:46.913 "name": null, 00:09:46.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.913 "is_configured": false, 00:09:46.913 "data_offset": 0, 00:09:46.913 "data_size": 63488 00:09:46.913 }, 00:09:46.913 { 00:09:46.913 "name": "BaseBdev2", 00:09:46.913 "uuid": "b1e8dbd0-a4bf-499e-ac51-5f1d8df09202", 00:09:46.913 "is_configured": true, 00:09:46.913 "data_offset": 2048, 00:09:46.913 "data_size": 63488 00:09:46.913 }, 00:09:46.913 { 00:09:46.913 "name": "BaseBdev3", 00:09:46.913 "uuid": "8ba11486-5261-405d-9f8f-fd47278dd917", 00:09:46.913 "is_configured": true, 00:09:46.913 "data_offset": 2048, 00:09:46.913 "data_size": 63488 00:09:46.913 } 00:09:46.913 ] 00:09:46.913 }' 00:09:46.913 14:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.913 
14:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.175 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.175 [2024-12-09 14:42:25.259148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
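The `has_redundancy raid1` call above returns 0, which is why the test expects the array to stay `online` after `bdev_malloc_delete BaseBdev1`. A hedged sketch of that decision logic (the exact case arms in `bdev_raid.sh` may differ; `raid5f` is included here as an assumption):

```python
def has_redundancy(raid_level: str) -> bool:
    # Mirrors the shape of the case statement the trace exercises:
    # levels that can survive losing one base bdev.
    return raid_level in ("raid1", "raid5f")

def expected_state_after_removal(raid_level: str) -> str:
    # After deleting one base bdev, a redundant array should remain
    # online; a non-redundant one (e.g. raid0) would go offline.
    return "online" if has_redundancy(raid_level) else "offline"

print(expected_state_after_removal("raid1"))  # online
print(expected_state_after_removal("raid0"))  # offline
```

That matches the trace: after removing BaseBdev1 the harness verifies `Existed_Raid online raid1 0 2`, i.e. still online with two operational base bdevs.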
00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.436 [2024-12-09 14:42:25.414642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.436 [2024-12-09 14:42:25.414868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.436 [2024-12-09 14:42:25.510882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.436 [2024-12-09 14:42:25.511016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.436 [2024-12-09 14:42:25.511062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.436 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.695 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.695 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.695 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:47.695 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 BaseBdev2 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.696 14:42:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 [ 00:09:47.696 { 00:09:47.696 "name": "BaseBdev2", 00:09:47.696 "aliases": [ 00:09:47.696 "619e3bce-f456-4ebf-a49d-ddc999e258d2" 00:09:47.696 ], 00:09:47.696 "product_name": "Malloc disk", 00:09:47.696 "block_size": 512, 00:09:47.696 "num_blocks": 65536, 00:09:47.696 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:47.696 "assigned_rate_limits": { 00:09:47.696 "rw_ios_per_sec": 0, 00:09:47.696 "rw_mbytes_per_sec": 0, 00:09:47.696 "r_mbytes_per_sec": 0, 00:09:47.696 "w_mbytes_per_sec": 0 00:09:47.696 }, 00:09:47.696 "claimed": false, 00:09:47.696 "zoned": false, 00:09:47.696 "supported_io_types": { 00:09:47.696 "read": true, 00:09:47.696 "write": true, 00:09:47.696 "unmap": true, 00:09:47.696 "flush": true, 00:09:47.696 "reset": true, 00:09:47.696 "nvme_admin": false, 00:09:47.696 "nvme_io": false, 00:09:47.696 "nvme_io_md": false, 00:09:47.696 
"write_zeroes": true, 00:09:47.696 "zcopy": true, 00:09:47.696 "get_zone_info": false, 00:09:47.696 "zone_management": false, 00:09:47.696 "zone_append": false, 00:09:47.696 "compare": false, 00:09:47.696 "compare_and_write": false, 00:09:47.696 "abort": true, 00:09:47.696 "seek_hole": false, 00:09:47.696 "seek_data": false, 00:09:47.696 "copy": true, 00:09:47.696 "nvme_iov_md": false 00:09:47.696 }, 00:09:47.696 "memory_domains": [ 00:09:47.696 { 00:09:47.696 "dma_device_id": "system", 00:09:47.696 "dma_device_type": 1 00:09:47.696 }, 00:09:47.696 { 00:09:47.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.696 "dma_device_type": 2 00:09:47.696 } 00:09:47.696 ], 00:09:47.696 "driver_specific": {} 00:09:47.696 } 00:09:47.696 ] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 BaseBdev3 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 [ 00:09:47.696 { 00:09:47.696 "name": "BaseBdev3", 00:09:47.696 "aliases": [ 00:09:47.696 "22d454cd-acd7-48de-83af-f68099a35e0b" 00:09:47.696 ], 00:09:47.696 "product_name": "Malloc disk", 00:09:47.696 "block_size": 512, 00:09:47.696 "num_blocks": 65536, 00:09:47.696 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:47.696 "assigned_rate_limits": { 00:09:47.696 "rw_ios_per_sec": 0, 00:09:47.696 "rw_mbytes_per_sec": 0, 00:09:47.696 "r_mbytes_per_sec": 0, 00:09:47.696 "w_mbytes_per_sec": 0 00:09:47.696 }, 00:09:47.696 "claimed": false, 00:09:47.696 "zoned": false, 00:09:47.696 "supported_io_types": { 00:09:47.696 "read": true, 00:09:47.696 "write": true, 00:09:47.696 "unmap": true, 00:09:47.696 "flush": true, 00:09:47.696 "reset": true, 00:09:47.696 "nvme_admin": false, 00:09:47.696 "nvme_io": false, 
00:09:47.696 "nvme_io_md": false, 00:09:47.696 "write_zeroes": true, 00:09:47.696 "zcopy": true, 00:09:47.696 "get_zone_info": false, 00:09:47.696 "zone_management": false, 00:09:47.696 "zone_append": false, 00:09:47.696 "compare": false, 00:09:47.696 "compare_and_write": false, 00:09:47.696 "abort": true, 00:09:47.696 "seek_hole": false, 00:09:47.696 "seek_data": false, 00:09:47.696 "copy": true, 00:09:47.696 "nvme_iov_md": false 00:09:47.696 }, 00:09:47.696 "memory_domains": [ 00:09:47.696 { 00:09:47.696 "dma_device_id": "system", 00:09:47.696 "dma_device_type": 1 00:09:47.696 }, 00:09:47.696 { 00:09:47.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.696 "dma_device_type": 2 00:09:47.696 } 00:09:47.696 ], 00:09:47.696 "driver_specific": {} 00:09:47.696 } 00:09:47.696 ] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 [2024-12-09 14:42:25.734887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.696 [2024-12-09 14:42:25.734989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.696 [2024-12-09 14:42:25.735050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:47.696 [2024-12-09 14:42:25.737106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.696 "name": "Existed_Raid", 00:09:47.696 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:47.696 "strip_size_kb": 0, 00:09:47.696 "state": "configuring", 00:09:47.696 "raid_level": "raid1", 00:09:47.696 "superblock": true, 00:09:47.696 "num_base_bdevs": 3, 00:09:47.696 "num_base_bdevs_discovered": 2, 00:09:47.696 "num_base_bdevs_operational": 3, 00:09:47.696 "base_bdevs_list": [ 00:09:47.696 { 00:09:47.696 "name": "BaseBdev1", 00:09:47.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.696 "is_configured": false, 00:09:47.696 "data_offset": 0, 00:09:47.696 "data_size": 0 00:09:47.696 }, 00:09:47.696 { 00:09:47.696 "name": "BaseBdev2", 00:09:47.696 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:47.696 "is_configured": true, 00:09:47.696 "data_offset": 2048, 00:09:47.696 "data_size": 63488 00:09:47.696 }, 00:09:47.696 { 00:09:47.696 "name": "BaseBdev3", 00:09:47.696 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:47.696 "is_configured": true, 00:09:47.696 "data_offset": 2048, 00:09:47.696 "data_size": 63488 00:09:47.696 } 00:09:47.696 ] 00:09:47.696 }' 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.696 14:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.267 [2024-12-09 14:42:26.194103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.267 "name": "Existed_Raid", 00:09:48.267 "uuid": 
"f1542cbe-132a-480f-9aff-09614dae3708", 00:09:48.267 "strip_size_kb": 0, 00:09:48.267 "state": "configuring", 00:09:48.267 "raid_level": "raid1", 00:09:48.267 "superblock": true, 00:09:48.267 "num_base_bdevs": 3, 00:09:48.267 "num_base_bdevs_discovered": 1, 00:09:48.267 "num_base_bdevs_operational": 3, 00:09:48.267 "base_bdevs_list": [ 00:09:48.267 { 00:09:48.267 "name": "BaseBdev1", 00:09:48.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.267 "is_configured": false, 00:09:48.267 "data_offset": 0, 00:09:48.267 "data_size": 0 00:09:48.267 }, 00:09:48.267 { 00:09:48.267 "name": null, 00:09:48.267 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:48.267 "is_configured": false, 00:09:48.267 "data_offset": 0, 00:09:48.267 "data_size": 63488 00:09:48.267 }, 00:09:48.267 { 00:09:48.267 "name": "BaseBdev3", 00:09:48.267 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:48.267 "is_configured": true, 00:09:48.267 "data_offset": 2048, 00:09:48.267 "data_size": 63488 00:09:48.267 } 00:09:48.267 ] 00:09:48.267 }' 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.267 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.526 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.526 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:48.785 14:42:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.785 [2024-12-09 14:42:26.723473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.785 BaseBdev1 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
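The `configuring` dumps above show how discovered-vs-operational counts relate to the per-slot `is_configured` flags: an unconfigured slot keeps a zero UUID or a null name but still counts toward the operational total. A small illustration of that counting — the JSON is a hypothetical slice modeled on the dump above, and the arithmetic is an illustration, not SPDK's exact internal bookkeeping:

```python
import json

# Hypothetical slice of a "configuring" base_bdevs_list: BaseBdev1 not yet
# created (zero UUID), BaseBdev2 removed (slot retained with null name).
base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false},
  {"name": null, "is_configured": false},
  {"name": "BaseBdev3", "is_configured": true}
]
""")

# Configured slots correspond to num_base_bdevs_discovered in the dump;
# every slot still belongs to the array, hence num_base_bdevs_operational.
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
operational = len(base_bdevs_list)
print(discovered, operational)  # 1 3
```

With one configured slot out of three, the array reports `"state": "configuring"` until the missing base bdevs are created and claimed, which is exactly what the subsequent `bdev_malloc_create 32 512 -b BaseBdev1` in the trace supplies.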
00:09:48.785 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.785 [ 00:09:48.785 { 00:09:48.785 "name": "BaseBdev1", 00:09:48.785 "aliases": [ 00:09:48.785 "082f09a4-1730-4d54-96f6-9ffcdf8503a6" 00:09:48.785 ], 00:09:48.785 "product_name": "Malloc disk", 00:09:48.785 "block_size": 512, 00:09:48.785 "num_blocks": 65536, 00:09:48.785 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:48.785 "assigned_rate_limits": { 00:09:48.785 "rw_ios_per_sec": 0, 00:09:48.785 "rw_mbytes_per_sec": 0, 00:09:48.785 "r_mbytes_per_sec": 0, 00:09:48.785 "w_mbytes_per_sec": 0 00:09:48.785 }, 00:09:48.785 "claimed": true, 00:09:48.785 "claim_type": "exclusive_write", 00:09:48.785 "zoned": false, 00:09:48.785 "supported_io_types": { 00:09:48.786 "read": true, 00:09:48.786 "write": true, 00:09:48.786 "unmap": true, 00:09:48.786 "flush": true, 00:09:48.786 "reset": true, 00:09:48.786 "nvme_admin": false, 00:09:48.786 "nvme_io": false, 00:09:48.786 "nvme_io_md": false, 00:09:48.786 "write_zeroes": true, 00:09:48.786 "zcopy": true, 00:09:48.786 "get_zone_info": false, 00:09:48.786 "zone_management": false, 00:09:48.786 "zone_append": false, 00:09:48.786 "compare": false, 00:09:48.786 "compare_and_write": false, 00:09:48.786 "abort": true, 00:09:48.786 "seek_hole": false, 00:09:48.786 "seek_data": false, 00:09:48.786 "copy": true, 00:09:48.786 "nvme_iov_md": false 00:09:48.786 }, 00:09:48.786 "memory_domains": [ 00:09:48.786 { 00:09:48.786 "dma_device_id": "system", 00:09:48.786 "dma_device_type": 1 00:09:48.786 }, 00:09:48.786 { 00:09:48.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.786 "dma_device_type": 2 00:09:48.786 } 00:09:48.786 ], 00:09:48.786 "driver_specific": {} 00:09:48.786 } 00:09:48.786 ] 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.786 
14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.786 "name": "Existed_Raid", 00:09:48.786 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:48.786 "strip_size_kb": 0, 
00:09:48.786 "state": "configuring", 00:09:48.786 "raid_level": "raid1", 00:09:48.786 "superblock": true, 00:09:48.786 "num_base_bdevs": 3, 00:09:48.786 "num_base_bdevs_discovered": 2, 00:09:48.786 "num_base_bdevs_operational": 3, 00:09:48.786 "base_bdevs_list": [ 00:09:48.786 { 00:09:48.786 "name": "BaseBdev1", 00:09:48.786 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:48.786 "is_configured": true, 00:09:48.786 "data_offset": 2048, 00:09:48.786 "data_size": 63488 00:09:48.786 }, 00:09:48.786 { 00:09:48.786 "name": null, 00:09:48.786 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:48.786 "is_configured": false, 00:09:48.786 "data_offset": 0, 00:09:48.786 "data_size": 63488 00:09:48.786 }, 00:09:48.786 { 00:09:48.786 "name": "BaseBdev3", 00:09:48.786 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:48.786 "is_configured": true, 00:09:48.786 "data_offset": 2048, 00:09:48.786 "data_size": 63488 00:09:48.786 } 00:09:48.786 ] 00:09:48.786 }' 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.786 14:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.354 [2024-12-09 14:42:27.266677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.354 14:42:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.354 "name": "Existed_Raid", 00:09:49.354 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:49.354 "strip_size_kb": 0, 00:09:49.354 "state": "configuring", 00:09:49.354 "raid_level": "raid1", 00:09:49.354 "superblock": true, 00:09:49.354 "num_base_bdevs": 3, 00:09:49.354 "num_base_bdevs_discovered": 1, 00:09:49.354 "num_base_bdevs_operational": 3, 00:09:49.354 "base_bdevs_list": [ 00:09:49.354 { 00:09:49.354 "name": "BaseBdev1", 00:09:49.354 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:49.354 "is_configured": true, 00:09:49.354 "data_offset": 2048, 00:09:49.354 "data_size": 63488 00:09:49.354 }, 00:09:49.354 { 00:09:49.354 "name": null, 00:09:49.354 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:49.354 "is_configured": false, 00:09:49.354 "data_offset": 0, 00:09:49.354 "data_size": 63488 00:09:49.354 }, 00:09:49.354 { 00:09:49.354 "name": null, 00:09:49.354 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:49.354 "is_configured": false, 00:09:49.354 "data_offset": 0, 00:09:49.354 "data_size": 63488 00:09:49.354 } 00:09:49.354 ] 00:09:49.354 }' 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.354 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.923 [2024-12-09 14:42:27.797807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.923 "name": "Existed_Raid", 00:09:49.923 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:49.923 "strip_size_kb": 0, 00:09:49.923 "state": "configuring", 00:09:49.923 "raid_level": "raid1", 00:09:49.923 "superblock": true, 00:09:49.923 "num_base_bdevs": 3, 00:09:49.923 "num_base_bdevs_discovered": 2, 00:09:49.923 "num_base_bdevs_operational": 3, 00:09:49.923 "base_bdevs_list": [ 00:09:49.923 { 00:09:49.923 "name": "BaseBdev1", 00:09:49.923 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:49.923 "is_configured": true, 00:09:49.923 "data_offset": 2048, 00:09:49.923 "data_size": 63488 00:09:49.923 }, 00:09:49.923 { 00:09:49.923 "name": null, 00:09:49.923 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:49.923 "is_configured": false, 00:09:49.923 "data_offset": 0, 00:09:49.923 "data_size": 63488 00:09:49.923 }, 00:09:49.923 { 00:09:49.923 "name": "BaseBdev3", 00:09:49.923 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:49.923 "is_configured": true, 00:09:49.923 "data_offset": 2048, 00:09:49.923 "data_size": 63488 00:09:49.923 } 00:09:49.923 ] 00:09:49.923 }' 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.923 14:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.183 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.183 [2024-12-09 14:42:28.241092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.442 "name": "Existed_Raid", 00:09:50.442 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:50.442 "strip_size_kb": 0, 00:09:50.442 "state": "configuring", 00:09:50.442 "raid_level": "raid1", 00:09:50.442 "superblock": true, 00:09:50.442 "num_base_bdevs": 3, 00:09:50.442 "num_base_bdevs_discovered": 1, 00:09:50.442 "num_base_bdevs_operational": 3, 00:09:50.442 "base_bdevs_list": [ 00:09:50.442 { 00:09:50.442 "name": null, 00:09:50.442 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:50.442 "is_configured": false, 00:09:50.442 "data_offset": 0, 00:09:50.442 "data_size": 63488 00:09:50.442 }, 00:09:50.442 { 00:09:50.442 "name": null, 00:09:50.442 "uuid": 
"619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:50.442 "is_configured": false, 00:09:50.442 "data_offset": 0, 00:09:50.442 "data_size": 63488 00:09:50.442 }, 00:09:50.442 { 00:09:50.442 "name": "BaseBdev3", 00:09:50.442 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:50.442 "is_configured": true, 00:09:50.442 "data_offset": 2048, 00:09:50.442 "data_size": 63488 00:09:50.442 } 00:09:50.442 ] 00:09:50.442 }' 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.442 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.702 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.702 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.702 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.702 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.702 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.962 [2024-12-09 14:42:28.853716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.962 "name": "Existed_Raid", 00:09:50.962 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:50.962 "strip_size_kb": 0, 00:09:50.962 "state": "configuring", 00:09:50.962 
"raid_level": "raid1", 00:09:50.962 "superblock": true, 00:09:50.962 "num_base_bdevs": 3, 00:09:50.962 "num_base_bdevs_discovered": 2, 00:09:50.962 "num_base_bdevs_operational": 3, 00:09:50.962 "base_bdevs_list": [ 00:09:50.962 { 00:09:50.962 "name": null, 00:09:50.962 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:50.962 "is_configured": false, 00:09:50.962 "data_offset": 0, 00:09:50.962 "data_size": 63488 00:09:50.962 }, 00:09:50.962 { 00:09:50.962 "name": "BaseBdev2", 00:09:50.962 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:50.962 "is_configured": true, 00:09:50.962 "data_offset": 2048, 00:09:50.962 "data_size": 63488 00:09:50.962 }, 00:09:50.962 { 00:09:50.962 "name": "BaseBdev3", 00:09:50.962 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:50.962 "is_configured": true, 00:09:50.962 "data_offset": 2048, 00:09:50.962 "data_size": 63488 00:09:50.962 } 00:09:50.962 ] 00:09:50.962 }' 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.962 14:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.222 14:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.222 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 082f09a4-1730-4d54-96f6-9ffcdf8503a6 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.506 [2024-12-09 14:42:29.418783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:51.506 [2024-12-09 14:42:29.419040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.506 [2024-12-09 14:42:29.419054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.506 [2024-12-09 14:42:29.419303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:51.506 [2024-12-09 14:42:29.419460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.506 [2024-12-09 14:42:29.419471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:51.506 [2024-12-09 14:42:29.419636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.506 NewBaseBdev 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:51.506 
14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.506 [ 00:09:51.506 { 00:09:51.506 "name": "NewBaseBdev", 00:09:51.506 "aliases": [ 00:09:51.506 "082f09a4-1730-4d54-96f6-9ffcdf8503a6" 00:09:51.506 ], 00:09:51.506 "product_name": "Malloc disk", 00:09:51.506 "block_size": 512, 00:09:51.506 "num_blocks": 65536, 00:09:51.506 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:51.506 "assigned_rate_limits": { 00:09:51.506 "rw_ios_per_sec": 0, 00:09:51.506 "rw_mbytes_per_sec": 0, 00:09:51.506 "r_mbytes_per_sec": 0, 00:09:51.506 "w_mbytes_per_sec": 0 00:09:51.506 }, 00:09:51.506 "claimed": true, 00:09:51.506 "claim_type": "exclusive_write", 00:09:51.506 
"zoned": false, 00:09:51.506 "supported_io_types": { 00:09:51.506 "read": true, 00:09:51.506 "write": true, 00:09:51.506 "unmap": true, 00:09:51.506 "flush": true, 00:09:51.506 "reset": true, 00:09:51.506 "nvme_admin": false, 00:09:51.506 "nvme_io": false, 00:09:51.506 "nvme_io_md": false, 00:09:51.506 "write_zeroes": true, 00:09:51.506 "zcopy": true, 00:09:51.506 "get_zone_info": false, 00:09:51.506 "zone_management": false, 00:09:51.506 "zone_append": false, 00:09:51.506 "compare": false, 00:09:51.506 "compare_and_write": false, 00:09:51.506 "abort": true, 00:09:51.506 "seek_hole": false, 00:09:51.506 "seek_data": false, 00:09:51.506 "copy": true, 00:09:51.506 "nvme_iov_md": false 00:09:51.506 }, 00:09:51.506 "memory_domains": [ 00:09:51.506 { 00:09:51.506 "dma_device_id": "system", 00:09:51.506 "dma_device_type": 1 00:09:51.506 }, 00:09:51.506 { 00:09:51.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.506 "dma_device_type": 2 00:09:51.506 } 00:09:51.506 ], 00:09:51.506 "driver_specific": {} 00:09:51.506 } 00:09:51.506 ] 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.506 "name": "Existed_Raid", 00:09:51.506 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:51.506 "strip_size_kb": 0, 00:09:51.506 "state": "online", 00:09:51.506 "raid_level": "raid1", 00:09:51.506 "superblock": true, 00:09:51.506 "num_base_bdevs": 3, 00:09:51.506 "num_base_bdevs_discovered": 3, 00:09:51.506 "num_base_bdevs_operational": 3, 00:09:51.506 "base_bdevs_list": [ 00:09:51.506 { 00:09:51.506 "name": "NewBaseBdev", 00:09:51.506 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:51.506 "is_configured": true, 00:09:51.506 "data_offset": 2048, 00:09:51.506 "data_size": 63488 00:09:51.506 }, 00:09:51.506 { 00:09:51.506 "name": "BaseBdev2", 00:09:51.506 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:51.506 "is_configured": true, 00:09:51.506 "data_offset": 2048, 00:09:51.506 "data_size": 63488 00:09:51.506 }, 00:09:51.506 
{ 00:09:51.506 "name": "BaseBdev3", 00:09:51.506 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:51.506 "is_configured": true, 00:09:51.506 "data_offset": 2048, 00:09:51.506 "data_size": 63488 00:09:51.506 } 00:09:51.506 ] 00:09:51.506 }' 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.506 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.075 [2024-12-09 14:42:29.922287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.075 "name": "Existed_Raid", 00:09:52.075 
"aliases": [ 00:09:52.075 "f1542cbe-132a-480f-9aff-09614dae3708" 00:09:52.075 ], 00:09:52.075 "product_name": "Raid Volume", 00:09:52.075 "block_size": 512, 00:09:52.075 "num_blocks": 63488, 00:09:52.075 "uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:52.075 "assigned_rate_limits": { 00:09:52.075 "rw_ios_per_sec": 0, 00:09:52.075 "rw_mbytes_per_sec": 0, 00:09:52.075 "r_mbytes_per_sec": 0, 00:09:52.075 "w_mbytes_per_sec": 0 00:09:52.075 }, 00:09:52.075 "claimed": false, 00:09:52.075 "zoned": false, 00:09:52.075 "supported_io_types": { 00:09:52.075 "read": true, 00:09:52.075 "write": true, 00:09:52.075 "unmap": false, 00:09:52.075 "flush": false, 00:09:52.075 "reset": true, 00:09:52.075 "nvme_admin": false, 00:09:52.075 "nvme_io": false, 00:09:52.075 "nvme_io_md": false, 00:09:52.075 "write_zeroes": true, 00:09:52.075 "zcopy": false, 00:09:52.075 "get_zone_info": false, 00:09:52.075 "zone_management": false, 00:09:52.075 "zone_append": false, 00:09:52.075 "compare": false, 00:09:52.075 "compare_and_write": false, 00:09:52.075 "abort": false, 00:09:52.075 "seek_hole": false, 00:09:52.075 "seek_data": false, 00:09:52.075 "copy": false, 00:09:52.075 "nvme_iov_md": false 00:09:52.075 }, 00:09:52.075 "memory_domains": [ 00:09:52.075 { 00:09:52.075 "dma_device_id": "system", 00:09:52.075 "dma_device_type": 1 00:09:52.075 }, 00:09:52.075 { 00:09:52.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.075 "dma_device_type": 2 00:09:52.075 }, 00:09:52.075 { 00:09:52.075 "dma_device_id": "system", 00:09:52.075 "dma_device_type": 1 00:09:52.075 }, 00:09:52.075 { 00:09:52.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.075 "dma_device_type": 2 00:09:52.075 }, 00:09:52.075 { 00:09:52.075 "dma_device_id": "system", 00:09:52.075 "dma_device_type": 1 00:09:52.075 }, 00:09:52.075 { 00:09:52.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.075 "dma_device_type": 2 00:09:52.075 } 00:09:52.075 ], 00:09:52.075 "driver_specific": { 00:09:52.075 "raid": { 00:09:52.075 
"uuid": "f1542cbe-132a-480f-9aff-09614dae3708", 00:09:52.075 "strip_size_kb": 0, 00:09:52.075 "state": "online", 00:09:52.075 "raid_level": "raid1", 00:09:52.075 "superblock": true, 00:09:52.075 "num_base_bdevs": 3, 00:09:52.075 "num_base_bdevs_discovered": 3, 00:09:52.075 "num_base_bdevs_operational": 3, 00:09:52.075 "base_bdevs_list": [ 00:09:52.075 { 00:09:52.075 "name": "NewBaseBdev", 00:09:52.075 "uuid": "082f09a4-1730-4d54-96f6-9ffcdf8503a6", 00:09:52.075 "is_configured": true, 00:09:52.075 "data_offset": 2048, 00:09:52.075 "data_size": 63488 00:09:52.075 }, 00:09:52.075 { 00:09:52.075 "name": "BaseBdev2", 00:09:52.075 "uuid": "619e3bce-f456-4ebf-a49d-ddc999e258d2", 00:09:52.075 "is_configured": true, 00:09:52.075 "data_offset": 2048, 00:09:52.075 "data_size": 63488 00:09:52.075 }, 00:09:52.075 { 00:09:52.075 "name": "BaseBdev3", 00:09:52.075 "uuid": "22d454cd-acd7-48de-83af-f68099a35e0b", 00:09:52.075 "is_configured": true, 00:09:52.075 "data_offset": 2048, 00:09:52.075 "data_size": 63488 00:09:52.075 } 00:09:52.075 ] 00:09:52.075 } 00:09:52.075 } 00:09:52.075 }' 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:52.075 BaseBdev2 00:09:52.075 BaseBdev3' 00:09:52.075 14:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:52.075 14:42:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.075 14:42:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.075 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.076 [2024-12-09 14:42:30.185528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.076 [2024-12-09 14:42:30.185561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.076 [2024-12-09 14:42:30.185642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.076 [2024-12-09 14:42:30.185922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.076 [2024-12-09 14:42:30.185933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69297 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 69297 ']' 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69297 00:09:52.076 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:52.335 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.335 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69297 00:09:52.335 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.335 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.335 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69297' 00:09:52.335 killing process with pid 69297 00:09:52.335 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69297 00:09:52.335 [2024-12-09 14:42:30.234754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.335 14:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69297 00:09:52.593 [2024-12-09 14:42:30.536119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.973 ************************************ 00:09:53.973 END TEST raid_state_function_test_sb 00:09:53.973 ************************************ 00:09:53.973 14:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:53.973 00:09:53.973 real 0m10.691s 00:09:53.973 user 0m17.023s 00:09:53.973 sys 0m1.813s 00:09:53.973 14:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.973 14:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.973 14:42:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:53.973 14:42:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:53.973 14:42:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.973 14:42:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.973 ************************************ 00:09:53.973 START TEST raid_superblock_test 00:09:53.973 ************************************ 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69917 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69917 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69917 ']' 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.973 14:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.973 [2024-12-09 14:42:31.840663] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:09:53.973 [2024-12-09 14:42:31.840872] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69917 ] 00:09:53.973 [2024-12-09 14:42:32.012677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.232 [2024-12-09 14:42:32.125227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.232 [2024-12-09 14:42:32.325654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.232 [2024-12-09 14:42:32.325807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:54.801 
14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 malloc1 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 [2024-12-09 14:42:32.726037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.801 [2024-12-09 14:42:32.726140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.801 [2024-12-09 14:42:32.726180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.801 [2024-12-09 14:42:32.726208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.801 [2024-12-09 14:42:32.728344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.801 [2024-12-09 14:42:32.728419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.801 pt1 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 malloc2 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 [2024-12-09 14:42:32.784522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.801 [2024-12-09 14:42:32.784589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.801 [2024-12-09 14:42:32.784614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:54.801 [2024-12-09 14:42:32.784623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.801 [2024-12-09 14:42:32.786698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.801 [2024-12-09 14:42:32.786796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.801 
pt2 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 malloc3 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 [2024-12-09 14:42:32.852527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.801 [2024-12-09 14:42:32.852659] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.801 [2024-12-09 14:42:32.852707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:54.801 [2024-12-09 14:42:32.852752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.801 [2024-12-09 14:42:32.855050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.801 [2024-12-09 14:42:32.855146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.801 pt3 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 [2024-12-09 14:42:32.864538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.801 [2024-12-09 14:42:32.866381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.801 [2024-12-09 14:42:32.866490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.801 [2024-12-09 14:42:32.866678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:54.801 [2024-12-09 14:42:32.866757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.801 [2024-12-09 14:42:32.867039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:54.801 
[2024-12-09 14:42:32.867276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:54.801 [2024-12-09 14:42:32.867329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:54.801 [2024-12-09 14:42:32.867544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:09:54.801 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.060 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.060 "name": "raid_bdev1", 00:09:55.060 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:55.060 "strip_size_kb": 0, 00:09:55.060 "state": "online", 00:09:55.060 "raid_level": "raid1", 00:09:55.060 "superblock": true, 00:09:55.060 "num_base_bdevs": 3, 00:09:55.060 "num_base_bdevs_discovered": 3, 00:09:55.060 "num_base_bdevs_operational": 3, 00:09:55.060 "base_bdevs_list": [ 00:09:55.060 { 00:09:55.060 "name": "pt1", 00:09:55.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.060 "is_configured": true, 00:09:55.060 "data_offset": 2048, 00:09:55.060 "data_size": 63488 00:09:55.060 }, 00:09:55.060 { 00:09:55.060 "name": "pt2", 00:09:55.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.060 "is_configured": true, 00:09:55.060 "data_offset": 2048, 00:09:55.060 "data_size": 63488 00:09:55.060 }, 00:09:55.060 { 00:09:55.060 "name": "pt3", 00:09:55.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.060 "is_configured": true, 00:09:55.060 "data_offset": 2048, 00:09:55.060 "data_size": 63488 00:09:55.060 } 00:09:55.060 ] 00:09:55.060 }' 00:09:55.060 14:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.060 14:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.319 14:42:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.319 [2024-12-09 14:42:33.376096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.319 "name": "raid_bdev1", 00:09:55.319 "aliases": [ 00:09:55.319 "78d24403-f78d-473b-8a94-c44d87602e08" 00:09:55.319 ], 00:09:55.319 "product_name": "Raid Volume", 00:09:55.319 "block_size": 512, 00:09:55.319 "num_blocks": 63488, 00:09:55.319 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:55.319 "assigned_rate_limits": { 00:09:55.319 "rw_ios_per_sec": 0, 00:09:55.319 "rw_mbytes_per_sec": 0, 00:09:55.319 "r_mbytes_per_sec": 0, 00:09:55.319 "w_mbytes_per_sec": 0 00:09:55.319 }, 00:09:55.319 "claimed": false, 00:09:55.319 "zoned": false, 00:09:55.319 "supported_io_types": { 00:09:55.319 "read": true, 00:09:55.319 "write": true, 00:09:55.319 "unmap": false, 00:09:55.319 "flush": false, 00:09:55.319 "reset": true, 00:09:55.319 "nvme_admin": false, 00:09:55.319 "nvme_io": false, 00:09:55.319 "nvme_io_md": false, 00:09:55.319 "write_zeroes": true, 00:09:55.319 "zcopy": false, 00:09:55.319 "get_zone_info": false, 00:09:55.319 "zone_management": false, 00:09:55.319 "zone_append": false, 00:09:55.319 "compare": false, 00:09:55.319 
"compare_and_write": false, 00:09:55.319 "abort": false, 00:09:55.319 "seek_hole": false, 00:09:55.319 "seek_data": false, 00:09:55.319 "copy": false, 00:09:55.319 "nvme_iov_md": false 00:09:55.319 }, 00:09:55.319 "memory_domains": [ 00:09:55.319 { 00:09:55.319 "dma_device_id": "system", 00:09:55.319 "dma_device_type": 1 00:09:55.319 }, 00:09:55.319 { 00:09:55.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.319 "dma_device_type": 2 00:09:55.319 }, 00:09:55.319 { 00:09:55.319 "dma_device_id": "system", 00:09:55.319 "dma_device_type": 1 00:09:55.319 }, 00:09:55.319 { 00:09:55.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.319 "dma_device_type": 2 00:09:55.319 }, 00:09:55.319 { 00:09:55.319 "dma_device_id": "system", 00:09:55.319 "dma_device_type": 1 00:09:55.319 }, 00:09:55.319 { 00:09:55.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.319 "dma_device_type": 2 00:09:55.319 } 00:09:55.319 ], 00:09:55.319 "driver_specific": { 00:09:55.319 "raid": { 00:09:55.319 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:55.319 "strip_size_kb": 0, 00:09:55.319 "state": "online", 00:09:55.319 "raid_level": "raid1", 00:09:55.319 "superblock": true, 00:09:55.319 "num_base_bdevs": 3, 00:09:55.319 "num_base_bdevs_discovered": 3, 00:09:55.319 "num_base_bdevs_operational": 3, 00:09:55.319 "base_bdevs_list": [ 00:09:55.319 { 00:09:55.319 "name": "pt1", 00:09:55.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.319 "is_configured": true, 00:09:55.319 "data_offset": 2048, 00:09:55.319 "data_size": 63488 00:09:55.319 }, 00:09:55.319 { 00:09:55.319 "name": "pt2", 00:09:55.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.319 "is_configured": true, 00:09:55.319 "data_offset": 2048, 00:09:55.319 "data_size": 63488 00:09:55.319 }, 00:09:55.319 { 00:09:55.319 "name": "pt3", 00:09:55.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.319 "is_configured": true, 00:09:55.319 "data_offset": 2048, 00:09:55.319 "data_size": 63488 00:09:55.319 } 
00:09:55.319 ] 00:09:55.319 } 00:09:55.319 } 00:09:55.319 }' 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:55.319 pt2 00:09:55.319 pt3' 00:09:55.319 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 [2024-12-09 14:42:33.635615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=78d24403-f78d-473b-8a94-c44d87602e08 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 78d24403-f78d-473b-8a94-c44d87602e08 ']' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 [2024-12-09 14:42:33.667190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.579 [2024-12-09 14:42:33.667264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.579 [2024-12-09 14:42:33.667350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.579 [2024-12-09 14:42:33.667437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.579 [2024-12-09 14:42:33.667448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:55.840 14:42:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.840 [2024-12-09 14:42:33.818957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:55.840 [2024-12-09 14:42:33.820822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:55.840 [2024-12-09 14:42:33.820889] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:55.840 [2024-12-09 14:42:33.820943] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:55.840 [2024-12-09 14:42:33.820997] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:55.840 [2024-12-09 14:42:33.821017] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:55.840 [2024-12-09 14:42:33.821034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.840 [2024-12-09 14:42:33.821043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:55.840 request: 00:09:55.840 { 00:09:55.840 "name": "raid_bdev1", 00:09:55.840 "raid_level": "raid1", 00:09:55.840 "base_bdevs": [ 00:09:55.840 "malloc1", 00:09:55.840 "malloc2", 00:09:55.840 "malloc3" 00:09:55.840 ], 00:09:55.840 "superblock": false, 00:09:55.840 "method": "bdev_raid_create", 00:09:55.840 "req_id": 1 00:09:55.840 } 00:09:55.840 Got JSON-RPC error response 00:09:55.840 response: 00:09:55.840 { 00:09:55.840 "code": -17, 00:09:55.840 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:55.840 } 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.840 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.841 [2024-12-09 14:42:33.870836] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:55.841 [2024-12-09 14:42:33.870933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.841 [2024-12-09 14:42:33.870970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:55.841 [2024-12-09 14:42:33.871002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.841 [2024-12-09 14:42:33.873354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.841 [2024-12-09 14:42:33.873428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:55.841 [2024-12-09 14:42:33.873535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:55.841 [2024-12-09 14:42:33.873648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:55.841 pt1 00:09:55.841 
14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.841 "name": "raid_bdev1", 00:09:55.841 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:55.841 "strip_size_kb": 0, 00:09:55.841 
"state": "configuring", 00:09:55.841 "raid_level": "raid1", 00:09:55.841 "superblock": true, 00:09:55.841 "num_base_bdevs": 3, 00:09:55.841 "num_base_bdevs_discovered": 1, 00:09:55.841 "num_base_bdevs_operational": 3, 00:09:55.841 "base_bdevs_list": [ 00:09:55.841 { 00:09:55.841 "name": "pt1", 00:09:55.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.841 "is_configured": true, 00:09:55.841 "data_offset": 2048, 00:09:55.841 "data_size": 63488 00:09:55.841 }, 00:09:55.841 { 00:09:55.841 "name": null, 00:09:55.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.841 "is_configured": false, 00:09:55.841 "data_offset": 2048, 00:09:55.841 "data_size": 63488 00:09:55.841 }, 00:09:55.841 { 00:09:55.841 "name": null, 00:09:55.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.841 "is_configured": false, 00:09:55.841 "data_offset": 2048, 00:09:55.841 "data_size": 63488 00:09:55.841 } 00:09:55.841 ] 00:09:55.841 }' 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.841 14:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.410 [2024-12-09 14:42:34.286201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.410 [2024-12-09 14:42:34.286336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.410 [2024-12-09 14:42:34.286370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:56.410 
[2024-12-09 14:42:34.286380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.410 [2024-12-09 14:42:34.286914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.410 [2024-12-09 14:42:34.286945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.410 [2024-12-09 14:42:34.287051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.410 [2024-12-09 14:42:34.287077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.410 pt2 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.410 [2024-12-09 14:42:34.294210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.410 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.410 "name": "raid_bdev1", 00:09:56.410 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:56.410 "strip_size_kb": 0, 00:09:56.410 "state": "configuring", 00:09:56.410 "raid_level": "raid1", 00:09:56.410 "superblock": true, 00:09:56.410 "num_base_bdevs": 3, 00:09:56.410 "num_base_bdevs_discovered": 1, 00:09:56.410 "num_base_bdevs_operational": 3, 00:09:56.410 "base_bdevs_list": [ 00:09:56.410 { 00:09:56.411 "name": "pt1", 00:09:56.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.411 "is_configured": true, 00:09:56.411 "data_offset": 2048, 00:09:56.411 "data_size": 63488 00:09:56.411 }, 00:09:56.411 { 00:09:56.411 "name": null, 00:09:56.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.411 "is_configured": false, 00:09:56.411 "data_offset": 0, 00:09:56.411 "data_size": 63488 00:09:56.411 }, 00:09:56.411 { 00:09:56.411 "name": null, 00:09:56.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.411 "is_configured": false, 00:09:56.411 
"data_offset": 2048, 00:09:56.411 "data_size": 63488 00:09:56.411 } 00:09:56.411 ] 00:09:56.411 }' 00:09:56.411 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.411 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.670 [2024-12-09 14:42:34.713462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.670 [2024-12-09 14:42:34.713612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.670 [2024-12-09 14:42:34.713665] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:56.670 [2024-12-09 14:42:34.713704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.670 [2024-12-09 14:42:34.714203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.670 [2024-12-09 14:42:34.714270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.670 [2024-12-09 14:42:34.714393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.670 [2024-12-09 14:42:34.714465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.670 pt2 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.670 14:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.670 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.670 [2024-12-09 14:42:34.725400] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:56.670 [2024-12-09 14:42:34.725450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.670 [2024-12-09 14:42:34.725464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:56.670 [2024-12-09 14:42:34.725474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.671 [2024-12-09 14:42:34.725887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.671 [2024-12-09 14:42:34.725916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:56.671 [2024-12-09 14:42:34.725976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:56.671 [2024-12-09 14:42:34.725997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:56.671 [2024-12-09 14:42:34.726124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.671 [2024-12-09 14:42:34.726137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.671 [2024-12-09 14:42:34.726369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:56.671 [2024-12-09 14:42:34.726529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:56.671 [2024-12-09 14:42:34.726537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:56.671 [2024-12-09 14:42:34.726690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.671 pt3 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.671 "name": "raid_bdev1", 00:09:56.671 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:56.671 "strip_size_kb": 0, 00:09:56.671 "state": "online", 00:09:56.671 "raid_level": "raid1", 00:09:56.671 "superblock": true, 00:09:56.671 "num_base_bdevs": 3, 00:09:56.671 "num_base_bdevs_discovered": 3, 00:09:56.671 "num_base_bdevs_operational": 3, 00:09:56.671 "base_bdevs_list": [ 00:09:56.671 { 00:09:56.671 "name": "pt1", 00:09:56.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.671 "is_configured": true, 00:09:56.671 "data_offset": 2048, 00:09:56.671 "data_size": 63488 00:09:56.671 }, 00:09:56.671 { 00:09:56.671 "name": "pt2", 00:09:56.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.671 "is_configured": true, 00:09:56.671 "data_offset": 2048, 00:09:56.671 "data_size": 63488 00:09:56.671 }, 00:09:56.671 { 00:09:56.671 "name": "pt3", 00:09:56.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.671 "is_configured": true, 00:09:56.671 "data_offset": 2048, 00:09:56.671 "data_size": 63488 00:09:56.671 } 00:09:56.671 ] 00:09:56.671 }' 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.671 14:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.238 [2024-12-09 14:42:35.141056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.238 "name": "raid_bdev1", 00:09:57.238 "aliases": [ 00:09:57.238 "78d24403-f78d-473b-8a94-c44d87602e08" 00:09:57.238 ], 00:09:57.238 "product_name": "Raid Volume", 00:09:57.238 "block_size": 512, 00:09:57.238 "num_blocks": 63488, 00:09:57.238 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:57.238 "assigned_rate_limits": { 00:09:57.238 "rw_ios_per_sec": 0, 00:09:57.238 "rw_mbytes_per_sec": 0, 00:09:57.238 "r_mbytes_per_sec": 0, 00:09:57.238 "w_mbytes_per_sec": 0 00:09:57.238 }, 00:09:57.238 "claimed": false, 00:09:57.238 "zoned": false, 00:09:57.238 "supported_io_types": { 00:09:57.238 "read": true, 00:09:57.238 "write": true, 00:09:57.238 "unmap": false, 00:09:57.238 "flush": false, 00:09:57.238 "reset": true, 00:09:57.238 "nvme_admin": false, 00:09:57.238 "nvme_io": false, 00:09:57.238 "nvme_io_md": false, 00:09:57.238 "write_zeroes": true, 00:09:57.238 "zcopy": false, 00:09:57.238 "get_zone_info": false, 
00:09:57.238 "zone_management": false, 00:09:57.238 "zone_append": false, 00:09:57.238 "compare": false, 00:09:57.238 "compare_and_write": false, 00:09:57.238 "abort": false, 00:09:57.238 "seek_hole": false, 00:09:57.238 "seek_data": false, 00:09:57.238 "copy": false, 00:09:57.238 "nvme_iov_md": false 00:09:57.238 }, 00:09:57.238 "memory_domains": [ 00:09:57.238 { 00:09:57.238 "dma_device_id": "system", 00:09:57.238 "dma_device_type": 1 00:09:57.238 }, 00:09:57.238 { 00:09:57.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.238 "dma_device_type": 2 00:09:57.238 }, 00:09:57.238 { 00:09:57.238 "dma_device_id": "system", 00:09:57.238 "dma_device_type": 1 00:09:57.238 }, 00:09:57.238 { 00:09:57.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.238 "dma_device_type": 2 00:09:57.238 }, 00:09:57.238 { 00:09:57.238 "dma_device_id": "system", 00:09:57.238 "dma_device_type": 1 00:09:57.238 }, 00:09:57.238 { 00:09:57.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.238 "dma_device_type": 2 00:09:57.238 } 00:09:57.238 ], 00:09:57.238 "driver_specific": { 00:09:57.238 "raid": { 00:09:57.238 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:57.238 "strip_size_kb": 0, 00:09:57.238 "state": "online", 00:09:57.238 "raid_level": "raid1", 00:09:57.238 "superblock": true, 00:09:57.238 "num_base_bdevs": 3, 00:09:57.238 "num_base_bdevs_discovered": 3, 00:09:57.238 "num_base_bdevs_operational": 3, 00:09:57.238 "base_bdevs_list": [ 00:09:57.238 { 00:09:57.238 "name": "pt1", 00:09:57.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.238 "is_configured": true, 00:09:57.238 "data_offset": 2048, 00:09:57.238 "data_size": 63488 00:09:57.238 }, 00:09:57.238 { 00:09:57.238 "name": "pt2", 00:09:57.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.238 "is_configured": true, 00:09:57.238 "data_offset": 2048, 00:09:57.238 "data_size": 63488 00:09:57.238 }, 00:09:57.238 { 00:09:57.238 "name": "pt3", 00:09:57.238 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:57.238 "is_configured": true, 00:09:57.238 "data_offset": 2048, 00:09:57.238 "data_size": 63488 00:09:57.238 } 00:09:57.238 ] 00:09:57.238 } 00:09:57.238 } 00:09:57.238 }' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:57.238 pt2 00:09:57.238 pt3' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.238 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.540 [2024-12-09 14:42:35.448528] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 78d24403-f78d-473b-8a94-c44d87602e08 '!=' 78d24403-f78d-473b-8a94-c44d87602e08 ']' 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.540 [2024-12-09 14:42:35.496162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.540 14:42:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.540 "name": "raid_bdev1", 00:09:57.540 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:57.540 "strip_size_kb": 0, 00:09:57.540 "state": "online", 00:09:57.540 "raid_level": "raid1", 00:09:57.540 "superblock": true, 00:09:57.540 "num_base_bdevs": 3, 00:09:57.540 "num_base_bdevs_discovered": 2, 00:09:57.540 "num_base_bdevs_operational": 2, 00:09:57.540 "base_bdevs_list": [ 00:09:57.540 { 00:09:57.540 "name": null, 00:09:57.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.540 "is_configured": false, 00:09:57.540 "data_offset": 0, 00:09:57.540 "data_size": 63488 00:09:57.540 }, 00:09:57.540 { 00:09:57.540 "name": "pt2", 00:09:57.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.540 "is_configured": true, 00:09:57.540 "data_offset": 2048, 00:09:57.540 "data_size": 63488 00:09:57.540 }, 00:09:57.540 { 00:09:57.540 "name": "pt3", 00:09:57.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.540 "is_configured": true, 00:09:57.540 "data_offset": 2048, 00:09:57.540 "data_size": 63488 00:09:57.540 } 
00:09:57.540 ] 00:09:57.540 }' 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.540 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.136 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.136 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.136 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.136 [2024-12-09 14:42:35.979335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.136 [2024-12-09 14:42:35.979429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.136 [2024-12-09 14:42:35.979537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.136 [2024-12-09 14:42:35.979647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.136 [2024-12-09 14:42:35.979703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:58.136 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.136 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.137 14:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:58.137 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.137 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.137 14:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.137 14:42:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.137 [2024-12-09 14:42:36.067185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:58.137 [2024-12-09 14:42:36.067326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.137 [2024-12-09 14:42:36.067369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:58.137 [2024-12-09 14:42:36.067416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.137 [2024-12-09 14:42:36.069994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.137 [2024-12-09 14:42:36.070096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:58.137 [2024-12-09 14:42:36.070266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:58.137 [2024-12-09 14:42:36.070362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.137 pt2 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.137 14:42:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.137 "name": "raid_bdev1", 00:09:58.137 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:58.137 "strip_size_kb": 0, 00:09:58.137 "state": "configuring", 00:09:58.137 "raid_level": "raid1", 00:09:58.137 "superblock": true, 00:09:58.137 "num_base_bdevs": 3, 00:09:58.137 "num_base_bdevs_discovered": 1, 00:09:58.137 "num_base_bdevs_operational": 2, 00:09:58.137 "base_bdevs_list": [ 00:09:58.137 { 00:09:58.137 "name": null, 00:09:58.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.137 "is_configured": false, 00:09:58.137 "data_offset": 2048, 00:09:58.137 "data_size": 63488 00:09:58.137 }, 00:09:58.137 { 00:09:58.137 "name": "pt2", 00:09:58.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.137 "is_configured": true, 00:09:58.137 "data_offset": 2048, 00:09:58.137 "data_size": 63488 00:09:58.137 }, 00:09:58.137 { 00:09:58.137 "name": null, 00:09:58.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.137 "is_configured": false, 00:09:58.137 "data_offset": 2048, 00:09:58.137 "data_size": 63488 00:09:58.137 } 
00:09:58.137 ] 00:09:58.137 }' 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.137 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.707 [2024-12-09 14:42:36.542369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:58.707 [2024-12-09 14:42:36.542499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.707 [2024-12-09 14:42:36.542526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:58.707 [2024-12-09 14:42:36.542540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.707 [2024-12-09 14:42:36.543075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.707 [2024-12-09 14:42:36.543100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:58.707 [2024-12-09 14:42:36.543201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:58.707 [2024-12-09 14:42:36.543233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:58.707 [2024-12-09 14:42:36.543381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:58.707 [2024-12-09 14:42:36.543394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.707 [2024-12-09 14:42:36.543691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:58.707 [2024-12-09 14:42:36.543863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:58.707 [2024-12-09 14:42:36.543874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:58.707 [2024-12-09 14:42:36.544036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.707 pt3 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.707 
14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.707 "name": "raid_bdev1", 00:09:58.707 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:58.707 "strip_size_kb": 0, 00:09:58.707 "state": "online", 00:09:58.707 "raid_level": "raid1", 00:09:58.707 "superblock": true, 00:09:58.707 "num_base_bdevs": 3, 00:09:58.707 "num_base_bdevs_discovered": 2, 00:09:58.707 "num_base_bdevs_operational": 2, 00:09:58.707 "base_bdevs_list": [ 00:09:58.707 { 00:09:58.707 "name": null, 00:09:58.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.707 "is_configured": false, 00:09:58.707 "data_offset": 2048, 00:09:58.707 "data_size": 63488 00:09:58.707 }, 00:09:58.707 { 00:09:58.707 "name": "pt2", 00:09:58.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.707 "is_configured": true, 00:09:58.707 "data_offset": 2048, 00:09:58.707 "data_size": 63488 00:09:58.707 }, 00:09:58.707 { 00:09:58.707 "name": "pt3", 00:09:58.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.707 "is_configured": true, 00:09:58.707 "data_offset": 2048, 00:09:58.707 "data_size": 63488 00:09:58.707 } 00:09:58.707 ] 00:09:58.707 }' 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.707 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.967 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.967 14:42:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.967 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.967 [2024-12-09 14:42:36.993622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.967 [2024-12-09 14:42:36.993713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.967 [2024-12-09 14:42:36.993844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.967 [2024-12-09 14:42:36.993946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.967 [2024-12-09 14:42:36.993991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:58.967 14:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.967 14:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.967 [2024-12-09 14:42:37.073482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:58.967 [2024-12-09 14:42:37.073593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.967 [2024-12-09 14:42:37.073632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:58.967 [2024-12-09 14:42:37.073679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.967 [2024-12-09 14:42:37.076136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.967 [2024-12-09 14:42:37.076208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:58.967 [2024-12-09 14:42:37.076328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:58.967 [2024-12-09 14:42:37.076427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:58.967 [2024-12-09 14:42:37.076650] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:58.967 [2024-12-09 14:42:37.076711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.967 [2024-12-09 14:42:37.076752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:58.967 [2024-12-09 14:42:37.076894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.967 pt1 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.967 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.226 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.226 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.226 14:42:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.226 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.226 "name": "raid_bdev1", 00:09:59.226 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:59.226 "strip_size_kb": 0, 00:09:59.226 "state": "configuring", 00:09:59.226 "raid_level": "raid1", 00:09:59.226 "superblock": true, 00:09:59.226 "num_base_bdevs": 3, 00:09:59.226 "num_base_bdevs_discovered": 1, 00:09:59.226 "num_base_bdevs_operational": 2, 00:09:59.226 "base_bdevs_list": [ 00:09:59.226 { 00:09:59.226 "name": null, 00:09:59.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.226 "is_configured": false, 00:09:59.226 "data_offset": 2048, 00:09:59.226 "data_size": 63488 00:09:59.226 }, 00:09:59.226 { 00:09:59.226 "name": "pt2", 00:09:59.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.226 "is_configured": true, 00:09:59.226 "data_offset": 2048, 00:09:59.226 "data_size": 63488 00:09:59.226 }, 00:09:59.226 { 00:09:59.226 "name": null, 00:09:59.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.226 "is_configured": false, 00:09:59.226 "data_offset": 2048, 00:09:59.226 "data_size": 63488 00:09:59.226 } 00:09:59.226 ] 00:09:59.226 }' 00:09:59.226 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.226 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.486 [2024-12-09 14:42:37.572734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.486 [2024-12-09 14:42:37.572815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.486 [2024-12-09 14:42:37.572842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:59.486 [2024-12-09 14:42:37.572852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.486 [2024-12-09 14:42:37.573386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.486 [2024-12-09 14:42:37.573407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.486 [2024-12-09 14:42:37.573515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:59.486 [2024-12-09 14:42:37.573540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.486 [2024-12-09 14:42:37.573690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:59.486 [2024-12-09 14:42:37.573700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.486 [2024-12-09 14:42:37.573946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:59.486 [2024-12-09 14:42:37.574135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:59.486 [2024-12-09 14:42:37.574152] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:59.486 [2024-12-09 14:42:37.574305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.486 pt3 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.486 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:59.746 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.746 "name": "raid_bdev1", 00:09:59.746 "uuid": "78d24403-f78d-473b-8a94-c44d87602e08", 00:09:59.746 "strip_size_kb": 0, 00:09:59.746 "state": "online", 00:09:59.746 "raid_level": "raid1", 00:09:59.746 "superblock": true, 00:09:59.746 "num_base_bdevs": 3, 00:09:59.746 "num_base_bdevs_discovered": 2, 00:09:59.746 "num_base_bdevs_operational": 2, 00:09:59.746 "base_bdevs_list": [ 00:09:59.746 { 00:09:59.746 "name": null, 00:09:59.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.746 "is_configured": false, 00:09:59.746 "data_offset": 2048, 00:09:59.746 "data_size": 63488 00:09:59.746 }, 00:09:59.746 { 00:09:59.746 "name": "pt2", 00:09:59.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.746 "is_configured": true, 00:09:59.746 "data_offset": 2048, 00:09:59.746 "data_size": 63488 00:09:59.746 }, 00:09:59.746 { 00:09:59.746 "name": "pt3", 00:09:59.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.746 "is_configured": true, 00:09:59.746 "data_offset": 2048, 00:09:59.746 "data_size": 63488 00:09:59.746 } 00:09:59.746 ] 00:09:59.746 }' 00:09:59.746 14:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.746 14:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:00.007 [2024-12-09 14:42:38.092059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.007 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 78d24403-f78d-473b-8a94-c44d87602e08 '!=' 78d24403-f78d-473b-8a94-c44d87602e08 ']' 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69917 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69917 ']' 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69917 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69917 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69917' 00:10:00.267 killing process with pid 69917 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 69917 00:10:00.267 [2024-12-09 14:42:38.177709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.267 [2024-12-09 14:42:38.177873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.267 [2024-12-09 14:42:38.177970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.267 [2024-12-09 14:42:38.178021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:00.267 14:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69917 00:10:00.526 [2024-12-09 14:42:38.476870] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.908 14:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:01.908 00:10:01.908 real 0m7.849s 00:10:01.908 user 0m12.302s 00:10:01.908 sys 0m1.404s 00:10:01.908 14:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.908 14:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.908 ************************************ 00:10:01.908 END TEST raid_superblock_test 00:10:01.908 ************************************ 00:10:01.908 14:42:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:01.908 14:42:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.908 14:42:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.908 14:42:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.908 ************************************ 00:10:01.908 START TEST raid_read_error_test 00:10:01.908 ************************************ 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:01.908 14:42:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:01.908 14:42:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nPhhkXM7BO 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70363 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70363 00:10:01.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70363 ']' 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.908 14:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.908 [2024-12-09 14:42:39.775813] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:10:01.908 [2024-12-09 14:42:39.775937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70363 ] 00:10:01.908 [2024-12-09 14:42:39.950037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.168 [2024-12-09 14:42:40.066630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.168 [2024-12-09 14:42:40.268335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.168 [2024-12-09 14:42:40.268405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 BaseBdev1_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 true 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 [2024-12-09 14:42:40.679760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:02.737 [2024-12-09 14:42:40.679901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.737 [2024-12-09 14:42:40.679932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:02.737 [2024-12-09 14:42:40.679945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.737 [2024-12-09 14:42:40.682221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.737 [2024-12-09 14:42:40.682271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:02.737 BaseBdev1 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 BaseBdev2_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 true 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 [2024-12-09 14:42:40.746300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:02.737 [2024-12-09 14:42:40.746423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.737 [2024-12-09 14:42:40.746448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:02.737 [2024-12-09 14:42:40.746459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.737 [2024-12-09 14:42:40.748660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.737 [2024-12-09 14:42:40.748701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:02.737 BaseBdev2 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 BaseBdev3_malloc 00:10:02.737 14:42:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 true 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 [2024-12-09 14:42:40.823036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:02.737 [2024-12-09 14:42:40.823097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.737 [2024-12-09 14:42:40.823118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:02.737 [2024-12-09 14:42:40.823129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.737 [2024-12-09 14:42:40.825312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.737 [2024-12-09 14:42:40.825405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:02.737 BaseBdev3 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.737 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.737 [2024-12-09 14:42:40.835089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.737 [2024-12-09 14:42:40.836971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.738 [2024-12-09 14:42:40.837047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.738 [2024-12-09 14:42:40.837263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:02.738 [2024-12-09 14:42:40.837276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:02.738 [2024-12-09 14:42:40.837531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:02.738 [2024-12-09 14:42:40.837729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:02.738 [2024-12-09 14:42:40.837747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:02.738 [2024-12-09 14:42:40.837919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.738 14:42:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.738 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.998 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.998 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.998 "name": "raid_bdev1", 00:10:02.998 "uuid": "fb9e3ec0-d609-444c-b295-27a36a1496eb", 00:10:02.998 "strip_size_kb": 0, 00:10:02.998 "state": "online", 00:10:02.998 "raid_level": "raid1", 00:10:02.998 "superblock": true, 00:10:02.998 "num_base_bdevs": 3, 00:10:02.998 "num_base_bdevs_discovered": 3, 00:10:02.998 "num_base_bdevs_operational": 3, 00:10:02.998 "base_bdevs_list": [ 00:10:02.998 { 00:10:02.998 "name": "BaseBdev1", 00:10:02.998 "uuid": "5cb12db8-3298-5c95-8fa2-e1fb49e1c231", 00:10:02.998 "is_configured": true, 00:10:02.998 "data_offset": 2048, 00:10:02.998 "data_size": 63488 00:10:02.998 }, 00:10:02.998 { 00:10:02.998 "name": "BaseBdev2", 00:10:02.998 "uuid": "219dc918-2e25-564b-8cc6-bf0638dac7a3", 00:10:02.998 "is_configured": true, 00:10:02.998 "data_offset": 2048, 00:10:02.998 "data_size": 63488 
00:10:02.998 }, 00:10:02.998 { 00:10:02.998 "name": "BaseBdev3", 00:10:02.998 "uuid": "3efcf4f0-c8f5-53fc-9fad-bf1e23a05563", 00:10:02.998 "is_configured": true, 00:10:02.998 "data_offset": 2048, 00:10:02.998 "data_size": 63488 00:10:02.998 } 00:10:02.998 ] 00:10:02.998 }' 00:10:02.998 14:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.998 14:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.257 14:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:03.257 14:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:03.257 [2024-12-09 14:42:41.311707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.196 
14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.196 "name": "raid_bdev1", 00:10:04.196 "uuid": "fb9e3ec0-d609-444c-b295-27a36a1496eb", 00:10:04.196 "strip_size_kb": 0, 00:10:04.196 "state": "online", 00:10:04.196 "raid_level": "raid1", 00:10:04.196 "superblock": true, 00:10:04.196 "num_base_bdevs": 3, 00:10:04.196 "num_base_bdevs_discovered": 3, 00:10:04.196 "num_base_bdevs_operational": 3, 00:10:04.196 "base_bdevs_list": [ 00:10:04.196 { 00:10:04.196 "name": "BaseBdev1", 00:10:04.196 "uuid": "5cb12db8-3298-5c95-8fa2-e1fb49e1c231", 
00:10:04.196 "is_configured": true, 00:10:04.196 "data_offset": 2048, 00:10:04.196 "data_size": 63488 00:10:04.196 }, 00:10:04.196 { 00:10:04.196 "name": "BaseBdev2", 00:10:04.196 "uuid": "219dc918-2e25-564b-8cc6-bf0638dac7a3", 00:10:04.196 "is_configured": true, 00:10:04.196 "data_offset": 2048, 00:10:04.196 "data_size": 63488 00:10:04.196 }, 00:10:04.196 { 00:10:04.196 "name": "BaseBdev3", 00:10:04.196 "uuid": "3efcf4f0-c8f5-53fc-9fad-bf1e23a05563", 00:10:04.196 "is_configured": true, 00:10:04.196 "data_offset": 2048, 00:10:04.196 "data_size": 63488 00:10:04.196 } 00:10:04.196 ] 00:10:04.196 }' 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.196 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:04.765 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 [2024-12-09 14:42:42.711543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.766 [2024-12-09 14:42:42.711591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.766 [2024-12-09 14:42:42.714741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.766 [2024-12-09 14:42:42.714898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.766 [2024-12-09 14:42:42.715054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.766 [2024-12-09 14:42:42.715069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:04.766 { 00:10:04.766 "results": [ 00:10:04.766 { 00:10:04.766 "job": "raid_bdev1", 
00:10:04.766 "core_mask": "0x1", 00:10:04.766 "workload": "randrw", 00:10:04.766 "percentage": 50, 00:10:04.766 "status": "finished", 00:10:04.766 "queue_depth": 1, 00:10:04.766 "io_size": 131072, 00:10:04.766 "runtime": 1.400574, 00:10:04.766 "iops": 12509.156959932143, 00:10:04.766 "mibps": 1563.6446199915179, 00:10:04.766 "io_failed": 0, 00:10:04.766 "io_timeout": 0, 00:10:04.766 "avg_latency_us": 77.09588083986363, 00:10:04.766 "min_latency_us": 24.482096069868994, 00:10:04.766 "max_latency_us": 1438.071615720524 00:10:04.766 } 00:10:04.766 ], 00:10:04.766 "core_count": 1 00:10:04.766 } 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70363 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70363 ']' 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70363 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70363 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70363' 00:10:04.766 killing process with pid 70363 00:10:04.766 14:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70363 00:10:04.766 [2024-12-09 14:42:42.758303] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.766 14:42:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70363 00:10:05.026 [2024-12-09 14:42:42.989175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nPhhkXM7BO 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:06.408 ************************************ 00:10:06.408 END TEST raid_read_error_test 00:10:06.408 ************************************ 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:06.408 00:10:06.408 real 0m4.544s 00:10:06.408 user 0m5.375s 00:10:06.408 sys 0m0.545s 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.408 14:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.408 14:42:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:06.408 14:42:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.408 14:42:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.408 14:42:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.408 ************************************ 00:10:06.408 START TEST raid_write_error_test 00:10:06.408 ************************************ 00:10:06.408 14:42:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WHmi2y5r2J 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70513 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70513 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70513 ']' 00:10:06.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.408 14:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.408 [2024-12-09 14:42:44.395108] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:10:06.408 [2024-12-09 14:42:44.395230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70513 ] 00:10:06.668 [2024-12-09 14:42:44.557590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.668 [2024-12-09 14:42:44.678134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.927 [2024-12-09 14:42:44.890056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.927 [2024-12-09 14:42:44.890107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.184 BaseBdev1_malloc 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.184 true 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.184 [2024-12-09 14:42:45.295347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:07.184 [2024-12-09 14:42:45.295408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.184 [2024-12-09 14:42:45.295429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:07.184 [2024-12-09 14:42:45.295441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.184 [2024-12-09 14:42:45.297556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.184 [2024-12-09 14:42:45.297666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:07.184 BaseBdev1 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.184 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.185 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:07.185 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.185 14:42:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.443 BaseBdev2_malloc 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.443 true 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.443 [2024-12-09 14:42:45.360947] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:07.443 [2024-12-09 14:42:45.361076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.443 [2024-12-09 14:42:45.361101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:07.443 [2024-12-09 14:42:45.361113] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.443 [2024-12-09 14:42:45.363295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.443 [2024-12-09 14:42:45.363341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:07.443 BaseBdev2 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.443 14:42:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.443 BaseBdev3_malloc 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.443 true 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.443 [2024-12-09 14:42:45.437652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:07.443 [2024-12-09 14:42:45.437758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.443 [2024-12-09 14:42:45.437793] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:07.443 [2024-12-09 14:42:45.437823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.443 [2024-12-09 14:42:45.440134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.443 [2024-12-09 14:42:45.440252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:07.443 BaseBdev3 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.443 [2024-12-09 14:42:45.449752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.443 [2024-12-09 14:42:45.451931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.443 [2024-12-09 14:42:45.452071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.443 [2024-12-09 14:42:45.452357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:07.443 [2024-12-09 14:42:45.452412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:07.443 [2024-12-09 14:42:45.452759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:07.443 [2024-12-09 14:42:45.452995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:07.443 [2024-12-09 14:42:45.453045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:07.443 [2024-12-09 14:42:45.453274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.443 "name": "raid_bdev1", 00:10:07.443 "uuid": "978c2ae4-2979-45e1-b408-3b958a5c3cdf", 00:10:07.443 "strip_size_kb": 0, 00:10:07.443 "state": "online", 00:10:07.443 "raid_level": "raid1", 00:10:07.443 "superblock": true, 00:10:07.443 "num_base_bdevs": 3, 00:10:07.443 "num_base_bdevs_discovered": 3, 00:10:07.443 "num_base_bdevs_operational": 3, 00:10:07.443 "base_bdevs_list": [ 00:10:07.443 { 00:10:07.443 "name": "BaseBdev1", 00:10:07.443 
"uuid": "bdea669d-fe34-5d2a-96f5-af3b52aeecbd", 00:10:07.443 "is_configured": true, 00:10:07.443 "data_offset": 2048, 00:10:07.443 "data_size": 63488 00:10:07.443 }, 00:10:07.443 { 00:10:07.443 "name": "BaseBdev2", 00:10:07.443 "uuid": "84073a4f-0b24-5a1d-91b3-077df479505d", 00:10:07.443 "is_configured": true, 00:10:07.443 "data_offset": 2048, 00:10:07.443 "data_size": 63488 00:10:07.443 }, 00:10:07.443 { 00:10:07.443 "name": "BaseBdev3", 00:10:07.443 "uuid": "fa9d43e5-b2ed-5675-89cf-28b9860a83e9", 00:10:07.443 "is_configured": true, 00:10:07.443 "data_offset": 2048, 00:10:07.443 "data_size": 63488 00:10:07.443 } 00:10:07.443 ] 00:10:07.443 }' 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.443 14:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.012 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:08.012 14:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:08.012 [2024-12-09 14:42:46.029849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:08.957 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:08.957 14:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.957 14:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.957 [2024-12-09 14:42:46.945730] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:08.957 [2024-12-09 14:42:46.945790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.957 [2024-12-09 14:42:46.946011] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:08.957 14:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.957 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:08.957 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.958 14:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.958 14:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.958 "name": "raid_bdev1", 00:10:08.958 "uuid": "978c2ae4-2979-45e1-b408-3b958a5c3cdf", 00:10:08.958 "strip_size_kb": 0, 00:10:08.958 "state": "online", 00:10:08.958 "raid_level": "raid1", 00:10:08.958 "superblock": true, 00:10:08.958 "num_base_bdevs": 3, 00:10:08.958 "num_base_bdevs_discovered": 2, 00:10:08.958 "num_base_bdevs_operational": 2, 00:10:08.958 "base_bdevs_list": [ 00:10:08.958 { 00:10:08.958 "name": null, 00:10:08.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.958 "is_configured": false, 00:10:08.958 "data_offset": 0, 00:10:08.958 "data_size": 63488 00:10:08.958 }, 00:10:08.958 { 00:10:08.958 "name": "BaseBdev2", 00:10:08.958 "uuid": "84073a4f-0b24-5a1d-91b3-077df479505d", 00:10:08.958 "is_configured": true, 00:10:08.958 "data_offset": 2048, 00:10:08.958 "data_size": 63488 00:10:08.958 }, 00:10:08.958 { 00:10:08.958 "name": "BaseBdev3", 00:10:08.958 "uuid": "fa9d43e5-b2ed-5675-89cf-28b9860a83e9", 00:10:08.958 "is_configured": true, 00:10:08.958 "data_offset": 2048, 00:10:08.958 "data_size": 63488 00:10:08.958 } 00:10:08.958 ] 00:10:08.958 }' 00:10:08.958 14:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.958 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.528 14:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:09.528 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.529 [2024-12-09 14:42:47.432424] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.529 [2024-12-09 14:42:47.432522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.529 [2024-12-09 14:42:47.435717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.529 [2024-12-09 14:42:47.435845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.529 [2024-12-09 14:42:47.435972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.529 [2024-12-09 14:42:47.436031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:09.529 { 00:10:09.529 "results": [ 00:10:09.529 { 00:10:09.529 "job": "raid_bdev1", 00:10:09.529 "core_mask": "0x1", 00:10:09.529 "workload": "randrw", 00:10:09.529 "percentage": 50, 00:10:09.529 "status": "finished", 00:10:09.529 "queue_depth": 1, 00:10:09.529 "io_size": 131072, 00:10:09.529 "runtime": 1.403399, 00:10:09.529 "iops": 13795.791503342954, 00:10:09.529 "mibps": 1724.4739379178693, 00:10:09.529 "io_failed": 0, 00:10:09.529 "io_timeout": 0, 00:10:09.529 "avg_latency_us": 69.60973730785948, 00:10:09.529 "min_latency_us": 24.817467248908297, 00:10:09.529 "max_latency_us": 1337.907423580786 00:10:09.529 } 00:10:09.529 ], 00:10:09.529 "core_count": 1 00:10:09.529 } 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70513 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70513 ']' 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70513 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:09.529 14:42:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70513 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70513' 00:10:09.529 killing process with pid 70513 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70513 00:10:09.529 [2024-12-09 14:42:47.490982] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.529 14:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70513 00:10:09.788 [2024-12-09 14:42:47.742933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.167 14:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WHmi2y5r2J 00:10:11.167 14:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:11.167 14:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:11.167 14:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:11.167 14:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:11.167 ************************************ 00:10:11.167 END TEST raid_write_error_test 00:10:11.167 ************************************ 00:10:11.167 14:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.167 14:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.167 14:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:10:11.167 00:10:11.167 real 0m4.723s 00:10:11.167 user 0m5.663s 00:10:11.167 sys 0m0.567s 00:10:11.167 14:42:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.167 14:42:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.167 14:42:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:11.167 14:42:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:11.167 14:42:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:11.167 14:42:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.167 14:42:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.167 14:42:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.167 ************************************ 00:10:11.167 START TEST raid_state_function_test 00:10:11.167 ************************************ 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.167 
14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:11.167 14:42:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=70652 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70652' 00:10:11.167 Process raid pid: 70652 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 70652 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 70652 ']' 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.167 14:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.167 [2024-12-09 14:42:49.176695] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:10:11.167 [2024-12-09 14:42:49.176912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.426 [2024-12-09 14:42:49.355858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.426 [2024-12-09 14:42:49.481448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.686 [2024-12-09 14:42:49.697982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.686 [2024-12-09 14:42:49.698122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.946 [2024-12-09 14:42:50.029869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.946 [2024-12-09 14:42:50.029931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.946 [2024-12-09 14:42:50.029942] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.946 [2024-12-09 14:42:50.029952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.946 [2024-12-09 14:42:50.029959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:11.946 [2024-12-09 14:42:50.029968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.946 [2024-12-09 14:42:50.029974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:11.946 [2024-12-09 14:42:50.029983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.946 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.205 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.205 "name": "Existed_Raid", 00:10:12.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.205 "strip_size_kb": 64, 00:10:12.205 "state": "configuring", 00:10:12.205 "raid_level": "raid0", 00:10:12.205 "superblock": false, 00:10:12.205 "num_base_bdevs": 4, 00:10:12.205 "num_base_bdevs_discovered": 0, 00:10:12.205 "num_base_bdevs_operational": 4, 00:10:12.205 "base_bdevs_list": [ 00:10:12.205 { 00:10:12.205 "name": "BaseBdev1", 00:10:12.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.205 "is_configured": false, 00:10:12.205 "data_offset": 0, 00:10:12.205 "data_size": 0 00:10:12.205 }, 00:10:12.205 { 00:10:12.205 "name": "BaseBdev2", 00:10:12.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.205 "is_configured": false, 00:10:12.205 "data_offset": 0, 00:10:12.205 "data_size": 0 00:10:12.205 }, 00:10:12.205 { 00:10:12.205 "name": "BaseBdev3", 00:10:12.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.205 "is_configured": false, 00:10:12.205 "data_offset": 0, 00:10:12.205 "data_size": 0 00:10:12.205 }, 00:10:12.205 { 00:10:12.205 "name": "BaseBdev4", 00:10:12.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.205 "is_configured": false, 00:10:12.205 "data_offset": 0, 00:10:12.205 "data_size": 0 00:10:12.205 } 00:10:12.205 ] 00:10:12.205 }' 00:10:12.205 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.205 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.464 [2024-12-09 14:42:50.540970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.464 [2024-12-09 14:42:50.541099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.464 [2024-12-09 14:42:50.552932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.464 [2024-12-09 14:42:50.553017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.464 [2024-12-09 14:42:50.553047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.464 [2024-12-09 14:42:50.553070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.464 [2024-12-09 14:42:50.553089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.464 [2024-12-09 14:42:50.553111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.464 [2024-12-09 14:42:50.553129] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.464 [2024-12-09 14:42:50.553150] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.464 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.725 [2024-12-09 14:42:50.603058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.725 BaseBdev1 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.725 [ 00:10:12.725 { 00:10:12.725 "name": "BaseBdev1", 00:10:12.725 "aliases": [ 00:10:12.725 "18aa8dae-dfee-4977-a473-ec93dbd85f67" 00:10:12.725 ], 00:10:12.725 "product_name": "Malloc disk", 00:10:12.725 "block_size": 512, 00:10:12.725 "num_blocks": 65536, 00:10:12.725 "uuid": "18aa8dae-dfee-4977-a473-ec93dbd85f67", 00:10:12.725 "assigned_rate_limits": { 00:10:12.725 "rw_ios_per_sec": 0, 00:10:12.725 "rw_mbytes_per_sec": 0, 00:10:12.725 "r_mbytes_per_sec": 0, 00:10:12.725 "w_mbytes_per_sec": 0 00:10:12.725 }, 00:10:12.725 "claimed": true, 00:10:12.725 "claim_type": "exclusive_write", 00:10:12.725 "zoned": false, 00:10:12.725 "supported_io_types": { 00:10:12.725 "read": true, 00:10:12.725 "write": true, 00:10:12.725 "unmap": true, 00:10:12.725 "flush": true, 00:10:12.725 "reset": true, 00:10:12.725 "nvme_admin": false, 00:10:12.725 "nvme_io": false, 00:10:12.725 "nvme_io_md": false, 00:10:12.725 "write_zeroes": true, 00:10:12.725 "zcopy": true, 00:10:12.725 "get_zone_info": false, 00:10:12.725 "zone_management": false, 00:10:12.725 "zone_append": false, 00:10:12.725 "compare": false, 00:10:12.725 "compare_and_write": false, 00:10:12.725 "abort": true, 00:10:12.725 "seek_hole": false, 00:10:12.725 "seek_data": false, 00:10:12.725 "copy": true, 00:10:12.725 "nvme_iov_md": false 00:10:12.725 }, 00:10:12.725 "memory_domains": [ 00:10:12.725 { 00:10:12.725 "dma_device_id": "system", 00:10:12.725 "dma_device_type": 1 00:10:12.725 }, 00:10:12.725 { 00:10:12.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.725 "dma_device_type": 2 00:10:12.725 } 00:10:12.725 ], 00:10:12.725 "driver_specific": {} 00:10:12.725 } 00:10:12.725 ] 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.725 "name": "Existed_Raid", 
00:10:12.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.725 "strip_size_kb": 64, 00:10:12.725 "state": "configuring", 00:10:12.725 "raid_level": "raid0", 00:10:12.725 "superblock": false, 00:10:12.725 "num_base_bdevs": 4, 00:10:12.725 "num_base_bdevs_discovered": 1, 00:10:12.725 "num_base_bdevs_operational": 4, 00:10:12.725 "base_bdevs_list": [ 00:10:12.725 { 00:10:12.725 "name": "BaseBdev1", 00:10:12.725 "uuid": "18aa8dae-dfee-4977-a473-ec93dbd85f67", 00:10:12.725 "is_configured": true, 00:10:12.725 "data_offset": 0, 00:10:12.725 "data_size": 65536 00:10:12.725 }, 00:10:12.725 { 00:10:12.725 "name": "BaseBdev2", 00:10:12.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.725 "is_configured": false, 00:10:12.725 "data_offset": 0, 00:10:12.725 "data_size": 0 00:10:12.725 }, 00:10:12.725 { 00:10:12.725 "name": "BaseBdev3", 00:10:12.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.725 "is_configured": false, 00:10:12.725 "data_offset": 0, 00:10:12.725 "data_size": 0 00:10:12.725 }, 00:10:12.725 { 00:10:12.725 "name": "BaseBdev4", 00:10:12.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.725 "is_configured": false, 00:10:12.725 "data_offset": 0, 00:10:12.725 "data_size": 0 00:10:12.725 } 00:10:12.725 ] 00:10:12.725 }' 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.725 14:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.295 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.296 [2024-12-09 14:42:51.118292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.296 [2024-12-09 14:42:51.118406] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.296 [2024-12-09 14:42:51.130322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.296 [2024-12-09 14:42:51.132322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.296 [2024-12-09 14:42:51.132406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.296 [2024-12-09 14:42:51.132437] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.296 [2024-12-09 14:42:51.132462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.296 [2024-12-09 14:42:51.132482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.296 [2024-12-09 14:42:51.132503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.296 "name": "Existed_Raid", 00:10:13.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.296 "strip_size_kb": 64, 00:10:13.296 "state": "configuring", 00:10:13.296 "raid_level": "raid0", 00:10:13.296 "superblock": false, 00:10:13.296 "num_base_bdevs": 4, 00:10:13.296 
"num_base_bdevs_discovered": 1, 00:10:13.296 "num_base_bdevs_operational": 4, 00:10:13.296 "base_bdevs_list": [ 00:10:13.296 { 00:10:13.296 "name": "BaseBdev1", 00:10:13.296 "uuid": "18aa8dae-dfee-4977-a473-ec93dbd85f67", 00:10:13.296 "is_configured": true, 00:10:13.296 "data_offset": 0, 00:10:13.296 "data_size": 65536 00:10:13.296 }, 00:10:13.296 { 00:10:13.296 "name": "BaseBdev2", 00:10:13.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.296 "is_configured": false, 00:10:13.296 "data_offset": 0, 00:10:13.296 "data_size": 0 00:10:13.296 }, 00:10:13.296 { 00:10:13.296 "name": "BaseBdev3", 00:10:13.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.296 "is_configured": false, 00:10:13.296 "data_offset": 0, 00:10:13.296 "data_size": 0 00:10:13.296 }, 00:10:13.296 { 00:10:13.296 "name": "BaseBdev4", 00:10:13.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.296 "is_configured": false, 00:10:13.296 "data_offset": 0, 00:10:13.296 "data_size": 0 00:10:13.296 } 00:10:13.296 ] 00:10:13.296 }' 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.296 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.556 [2024-12-09 14:42:51.586255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.556 BaseBdev2 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:13.556 14:42:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.556 [ 00:10:13.556 { 00:10:13.556 "name": "BaseBdev2", 00:10:13.556 "aliases": [ 00:10:13.556 "7dffd24b-8605-4708-8486-6f56454c618f" 00:10:13.556 ], 00:10:13.556 "product_name": "Malloc disk", 00:10:13.556 "block_size": 512, 00:10:13.556 "num_blocks": 65536, 00:10:13.556 "uuid": "7dffd24b-8605-4708-8486-6f56454c618f", 00:10:13.556 "assigned_rate_limits": { 00:10:13.556 "rw_ios_per_sec": 0, 00:10:13.556 "rw_mbytes_per_sec": 0, 00:10:13.556 "r_mbytes_per_sec": 0, 00:10:13.556 "w_mbytes_per_sec": 0 00:10:13.556 }, 00:10:13.556 "claimed": true, 00:10:13.556 "claim_type": "exclusive_write", 00:10:13.556 "zoned": false, 00:10:13.556 "supported_io_types": { 
00:10:13.556 "read": true, 00:10:13.556 "write": true, 00:10:13.556 "unmap": true, 00:10:13.556 "flush": true, 00:10:13.556 "reset": true, 00:10:13.556 "nvme_admin": false, 00:10:13.556 "nvme_io": false, 00:10:13.556 "nvme_io_md": false, 00:10:13.556 "write_zeroes": true, 00:10:13.556 "zcopy": true, 00:10:13.556 "get_zone_info": false, 00:10:13.556 "zone_management": false, 00:10:13.556 "zone_append": false, 00:10:13.556 "compare": false, 00:10:13.556 "compare_and_write": false, 00:10:13.556 "abort": true, 00:10:13.556 "seek_hole": false, 00:10:13.556 "seek_data": false, 00:10:13.556 "copy": true, 00:10:13.556 "nvme_iov_md": false 00:10:13.556 }, 00:10:13.556 "memory_domains": [ 00:10:13.556 { 00:10:13.556 "dma_device_id": "system", 00:10:13.556 "dma_device_type": 1 00:10:13.556 }, 00:10:13.556 { 00:10:13.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.556 "dma_device_type": 2 00:10:13.556 } 00:10:13.556 ], 00:10:13.556 "driver_specific": {} 00:10:13.556 } 00:10:13.556 ] 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.556 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.816 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.816 "name": "Existed_Raid", 00:10:13.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.816 "strip_size_kb": 64, 00:10:13.816 "state": "configuring", 00:10:13.816 "raid_level": "raid0", 00:10:13.816 "superblock": false, 00:10:13.816 "num_base_bdevs": 4, 00:10:13.816 "num_base_bdevs_discovered": 2, 00:10:13.816 "num_base_bdevs_operational": 4, 00:10:13.816 "base_bdevs_list": [ 00:10:13.816 { 00:10:13.816 "name": "BaseBdev1", 00:10:13.816 "uuid": "18aa8dae-dfee-4977-a473-ec93dbd85f67", 00:10:13.816 "is_configured": true, 00:10:13.816 "data_offset": 0, 00:10:13.816 "data_size": 65536 00:10:13.816 }, 00:10:13.816 { 00:10:13.816 "name": "BaseBdev2", 00:10:13.816 "uuid": "7dffd24b-8605-4708-8486-6f56454c618f", 00:10:13.816 
"is_configured": true, 00:10:13.816 "data_offset": 0, 00:10:13.816 "data_size": 65536 00:10:13.816 }, 00:10:13.816 { 00:10:13.816 "name": "BaseBdev3", 00:10:13.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.816 "is_configured": false, 00:10:13.816 "data_offset": 0, 00:10:13.816 "data_size": 0 00:10:13.816 }, 00:10:13.816 { 00:10:13.816 "name": "BaseBdev4", 00:10:13.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.816 "is_configured": false, 00:10:13.816 "data_offset": 0, 00:10:13.816 "data_size": 0 00:10:13.816 } 00:10:13.816 ] 00:10:13.816 }' 00:10:13.816 14:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.816 14:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.076 [2024-12-09 14:42:52.105961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.076 BaseBdev3 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.076 [ 00:10:14.076 { 00:10:14.076 "name": "BaseBdev3", 00:10:14.076 "aliases": [ 00:10:14.076 "c2c1e48b-1320-4975-b4b7-dd7e86a56455" 00:10:14.076 ], 00:10:14.076 "product_name": "Malloc disk", 00:10:14.076 "block_size": 512, 00:10:14.076 "num_blocks": 65536, 00:10:14.076 "uuid": "c2c1e48b-1320-4975-b4b7-dd7e86a56455", 00:10:14.076 "assigned_rate_limits": { 00:10:14.076 "rw_ios_per_sec": 0, 00:10:14.076 "rw_mbytes_per_sec": 0, 00:10:14.076 "r_mbytes_per_sec": 0, 00:10:14.076 "w_mbytes_per_sec": 0 00:10:14.076 }, 00:10:14.076 "claimed": true, 00:10:14.076 "claim_type": "exclusive_write", 00:10:14.076 "zoned": false, 00:10:14.076 "supported_io_types": { 00:10:14.076 "read": true, 00:10:14.076 "write": true, 00:10:14.076 "unmap": true, 00:10:14.076 "flush": true, 00:10:14.076 "reset": true, 00:10:14.076 "nvme_admin": false, 00:10:14.076 "nvme_io": false, 00:10:14.076 "nvme_io_md": false, 00:10:14.076 "write_zeroes": true, 00:10:14.076 "zcopy": true, 00:10:14.076 "get_zone_info": false, 00:10:14.076 "zone_management": false, 00:10:14.076 "zone_append": false, 00:10:14.076 "compare": false, 00:10:14.076 "compare_and_write": false, 
00:10:14.076 "abort": true, 00:10:14.076 "seek_hole": false, 00:10:14.076 "seek_data": false, 00:10:14.076 "copy": true, 00:10:14.076 "nvme_iov_md": false 00:10:14.076 }, 00:10:14.076 "memory_domains": [ 00:10:14.076 { 00:10:14.076 "dma_device_id": "system", 00:10:14.076 "dma_device_type": 1 00:10:14.076 }, 00:10:14.076 { 00:10:14.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.076 "dma_device_type": 2 00:10:14.076 } 00:10:14.076 ], 00:10:14.076 "driver_specific": {} 00:10:14.076 } 00:10:14.076 ] 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.076 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.336 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.336 "name": "Existed_Raid", 00:10:14.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.336 "strip_size_kb": 64, 00:10:14.336 "state": "configuring", 00:10:14.336 "raid_level": "raid0", 00:10:14.336 "superblock": false, 00:10:14.336 "num_base_bdevs": 4, 00:10:14.336 "num_base_bdevs_discovered": 3, 00:10:14.336 "num_base_bdevs_operational": 4, 00:10:14.336 "base_bdevs_list": [ 00:10:14.336 { 00:10:14.336 "name": "BaseBdev1", 00:10:14.336 "uuid": "18aa8dae-dfee-4977-a473-ec93dbd85f67", 00:10:14.336 "is_configured": true, 00:10:14.336 "data_offset": 0, 00:10:14.336 "data_size": 65536 00:10:14.336 }, 00:10:14.336 { 00:10:14.336 "name": "BaseBdev2", 00:10:14.336 "uuid": "7dffd24b-8605-4708-8486-6f56454c618f", 00:10:14.336 "is_configured": true, 00:10:14.336 "data_offset": 0, 00:10:14.336 "data_size": 65536 00:10:14.336 }, 00:10:14.336 { 00:10:14.336 "name": "BaseBdev3", 00:10:14.336 "uuid": "c2c1e48b-1320-4975-b4b7-dd7e86a56455", 00:10:14.336 "is_configured": true, 00:10:14.336 "data_offset": 0, 00:10:14.336 "data_size": 65536 00:10:14.336 }, 00:10:14.336 { 00:10:14.336 "name": "BaseBdev4", 00:10:14.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.336 "is_configured": false, 
00:10:14.336 "data_offset": 0, 00:10:14.336 "data_size": 0 00:10:14.336 } 00:10:14.336 ] 00:10:14.336 }' 00:10:14.336 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.336 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.596 [2024-12-09 14:42:52.657008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:14.596 [2024-12-09 14:42:52.657149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:14.596 [2024-12-09 14:42:52.657164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:14.596 [2024-12-09 14:42:52.657494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:14.596 [2024-12-09 14:42:52.657710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:14.596 [2024-12-09 14:42:52.657726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:14.596 [2024-12-09 14:42:52.658029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.596 BaseBdev4 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.596 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.597 [ 00:10:14.597 { 00:10:14.597 "name": "BaseBdev4", 00:10:14.597 "aliases": [ 00:10:14.597 "bde647c8-c062-46f6-9185-d52024ce9831" 00:10:14.597 ], 00:10:14.597 "product_name": "Malloc disk", 00:10:14.597 "block_size": 512, 00:10:14.597 "num_blocks": 65536, 00:10:14.597 "uuid": "bde647c8-c062-46f6-9185-d52024ce9831", 00:10:14.597 "assigned_rate_limits": { 00:10:14.597 "rw_ios_per_sec": 0, 00:10:14.597 "rw_mbytes_per_sec": 0, 00:10:14.597 "r_mbytes_per_sec": 0, 00:10:14.597 "w_mbytes_per_sec": 0 00:10:14.597 }, 00:10:14.597 "claimed": true, 00:10:14.597 "claim_type": "exclusive_write", 00:10:14.597 "zoned": false, 00:10:14.597 "supported_io_types": { 00:10:14.597 "read": true, 00:10:14.597 "write": true, 00:10:14.597 "unmap": true, 00:10:14.597 "flush": true, 00:10:14.597 "reset": true, 00:10:14.597 
"nvme_admin": false, 00:10:14.597 "nvme_io": false, 00:10:14.597 "nvme_io_md": false, 00:10:14.597 "write_zeroes": true, 00:10:14.597 "zcopy": true, 00:10:14.597 "get_zone_info": false, 00:10:14.597 "zone_management": false, 00:10:14.597 "zone_append": false, 00:10:14.597 "compare": false, 00:10:14.597 "compare_and_write": false, 00:10:14.597 "abort": true, 00:10:14.597 "seek_hole": false, 00:10:14.597 "seek_data": false, 00:10:14.597 "copy": true, 00:10:14.597 "nvme_iov_md": false 00:10:14.597 }, 00:10:14.597 "memory_domains": [ 00:10:14.597 { 00:10:14.597 "dma_device_id": "system", 00:10:14.597 "dma_device_type": 1 00:10:14.597 }, 00:10:14.597 { 00:10:14.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.597 "dma_device_type": 2 00:10:14.597 } 00:10:14.597 ], 00:10:14.597 "driver_specific": {} 00:10:14.597 } 00:10:14.597 ] 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.597 14:42:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.597 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.857 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.857 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.857 "name": "Existed_Raid", 00:10:14.857 "uuid": "77a72925-df43-4a9e-b019-e8188eb6ce7e", 00:10:14.857 "strip_size_kb": 64, 00:10:14.857 "state": "online", 00:10:14.857 "raid_level": "raid0", 00:10:14.857 "superblock": false, 00:10:14.857 "num_base_bdevs": 4, 00:10:14.857 "num_base_bdevs_discovered": 4, 00:10:14.857 "num_base_bdevs_operational": 4, 00:10:14.857 "base_bdevs_list": [ 00:10:14.857 { 00:10:14.857 "name": "BaseBdev1", 00:10:14.857 "uuid": "18aa8dae-dfee-4977-a473-ec93dbd85f67", 00:10:14.857 "is_configured": true, 00:10:14.857 "data_offset": 0, 00:10:14.857 "data_size": 65536 00:10:14.857 }, 00:10:14.857 { 00:10:14.857 "name": "BaseBdev2", 00:10:14.857 "uuid": "7dffd24b-8605-4708-8486-6f56454c618f", 00:10:14.857 "is_configured": true, 00:10:14.857 "data_offset": 0, 00:10:14.857 "data_size": 65536 00:10:14.857 }, 00:10:14.857 { 00:10:14.857 "name": "BaseBdev3", 00:10:14.857 "uuid": 
"c2c1e48b-1320-4975-b4b7-dd7e86a56455", 00:10:14.857 "is_configured": true, 00:10:14.857 "data_offset": 0, 00:10:14.857 "data_size": 65536 00:10:14.857 }, 00:10:14.857 { 00:10:14.857 "name": "BaseBdev4", 00:10:14.857 "uuid": "bde647c8-c062-46f6-9185-d52024ce9831", 00:10:14.857 "is_configured": true, 00:10:14.857 "data_offset": 0, 00:10:14.857 "data_size": 65536 00:10:14.857 } 00:10:14.857 ] 00:10:14.857 }' 00:10:14.857 14:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.857 14:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.119 [2024-12-09 14:42:53.128707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.119 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.119 14:42:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.119 "name": "Existed_Raid", 00:10:15.119 "aliases": [ 00:10:15.119 "77a72925-df43-4a9e-b019-e8188eb6ce7e" 00:10:15.119 ], 00:10:15.119 "product_name": "Raid Volume", 00:10:15.119 "block_size": 512, 00:10:15.119 "num_blocks": 262144, 00:10:15.119 "uuid": "77a72925-df43-4a9e-b019-e8188eb6ce7e", 00:10:15.119 "assigned_rate_limits": { 00:10:15.119 "rw_ios_per_sec": 0, 00:10:15.119 "rw_mbytes_per_sec": 0, 00:10:15.119 "r_mbytes_per_sec": 0, 00:10:15.119 "w_mbytes_per_sec": 0 00:10:15.119 }, 00:10:15.119 "claimed": false, 00:10:15.119 "zoned": false, 00:10:15.119 "supported_io_types": { 00:10:15.119 "read": true, 00:10:15.119 "write": true, 00:10:15.119 "unmap": true, 00:10:15.119 "flush": true, 00:10:15.119 "reset": true, 00:10:15.119 "nvme_admin": false, 00:10:15.119 "nvme_io": false, 00:10:15.119 "nvme_io_md": false, 00:10:15.119 "write_zeroes": true, 00:10:15.119 "zcopy": false, 00:10:15.119 "get_zone_info": false, 00:10:15.119 "zone_management": false, 00:10:15.119 "zone_append": false, 00:10:15.119 "compare": false, 00:10:15.119 "compare_and_write": false, 00:10:15.119 "abort": false, 00:10:15.119 "seek_hole": false, 00:10:15.119 "seek_data": false, 00:10:15.119 "copy": false, 00:10:15.119 "nvme_iov_md": false 00:10:15.119 }, 00:10:15.119 "memory_domains": [ 00:10:15.119 { 00:10:15.119 "dma_device_id": "system", 00:10:15.119 "dma_device_type": 1 00:10:15.119 }, 00:10:15.119 { 00:10:15.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.119 "dma_device_type": 2 00:10:15.119 }, 00:10:15.119 { 00:10:15.119 "dma_device_id": "system", 00:10:15.119 "dma_device_type": 1 00:10:15.119 }, 00:10:15.119 { 00:10:15.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.119 "dma_device_type": 2 00:10:15.119 }, 00:10:15.119 { 00:10:15.119 "dma_device_id": "system", 00:10:15.119 "dma_device_type": 1 00:10:15.119 }, 00:10:15.119 { 00:10:15.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:15.119 "dma_device_type": 2 00:10:15.119 }, 00:10:15.119 { 00:10:15.119 "dma_device_id": "system", 00:10:15.119 "dma_device_type": 1 00:10:15.119 }, 00:10:15.119 { 00:10:15.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.119 "dma_device_type": 2 00:10:15.119 } 00:10:15.119 ], 00:10:15.119 "driver_specific": { 00:10:15.119 "raid": { 00:10:15.119 "uuid": "77a72925-df43-4a9e-b019-e8188eb6ce7e", 00:10:15.119 "strip_size_kb": 64, 00:10:15.119 "state": "online", 00:10:15.119 "raid_level": "raid0", 00:10:15.119 "superblock": false, 00:10:15.119 "num_base_bdevs": 4, 00:10:15.119 "num_base_bdevs_discovered": 4, 00:10:15.120 "num_base_bdevs_operational": 4, 00:10:15.120 "base_bdevs_list": [ 00:10:15.120 { 00:10:15.120 "name": "BaseBdev1", 00:10:15.120 "uuid": "18aa8dae-dfee-4977-a473-ec93dbd85f67", 00:10:15.120 "is_configured": true, 00:10:15.120 "data_offset": 0, 00:10:15.120 "data_size": 65536 00:10:15.120 }, 00:10:15.120 { 00:10:15.120 "name": "BaseBdev2", 00:10:15.120 "uuid": "7dffd24b-8605-4708-8486-6f56454c618f", 00:10:15.120 "is_configured": true, 00:10:15.120 "data_offset": 0, 00:10:15.120 "data_size": 65536 00:10:15.120 }, 00:10:15.120 { 00:10:15.120 "name": "BaseBdev3", 00:10:15.120 "uuid": "c2c1e48b-1320-4975-b4b7-dd7e86a56455", 00:10:15.120 "is_configured": true, 00:10:15.120 "data_offset": 0, 00:10:15.120 "data_size": 65536 00:10:15.120 }, 00:10:15.120 { 00:10:15.120 "name": "BaseBdev4", 00:10:15.120 "uuid": "bde647c8-c062-46f6-9185-d52024ce9831", 00:10:15.120 "is_configured": true, 00:10:15.120 "data_offset": 0, 00:10:15.120 "data_size": 65536 00:10:15.120 } 00:10:15.120 ] 00:10:15.120 } 00:10:15.120 } 00:10:15.120 }' 00:10:15.120 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.120 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.120 BaseBdev2 00:10:15.120 BaseBdev3 
00:10:15.120 BaseBdev4' 00:10:15.120 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.379 14:42:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.379 14:42:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.379 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.379 [2024-12-09 14:42:53.419851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.379 [2024-12-09 14:42:53.419923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.379 [2024-12-09 14:42:53.419985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.638 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.639 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.639 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.639 "name": "Existed_Raid", 00:10:15.639 "uuid": "77a72925-df43-4a9e-b019-e8188eb6ce7e", 00:10:15.639 "strip_size_kb": 64, 00:10:15.639 "state": "offline", 00:10:15.639 "raid_level": "raid0", 00:10:15.639 "superblock": false, 00:10:15.639 "num_base_bdevs": 4, 00:10:15.639 "num_base_bdevs_discovered": 3, 00:10:15.639 "num_base_bdevs_operational": 3, 00:10:15.639 "base_bdevs_list": [ 00:10:15.639 { 00:10:15.639 "name": null, 00:10:15.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.639 "is_configured": false, 00:10:15.639 "data_offset": 0, 00:10:15.639 "data_size": 65536 00:10:15.639 }, 00:10:15.639 { 00:10:15.639 "name": "BaseBdev2", 00:10:15.639 "uuid": "7dffd24b-8605-4708-8486-6f56454c618f", 00:10:15.639 "is_configured": 
true, 00:10:15.639 "data_offset": 0, 00:10:15.639 "data_size": 65536 00:10:15.639 }, 00:10:15.639 { 00:10:15.639 "name": "BaseBdev3", 00:10:15.639 "uuid": "c2c1e48b-1320-4975-b4b7-dd7e86a56455", 00:10:15.639 "is_configured": true, 00:10:15.639 "data_offset": 0, 00:10:15.639 "data_size": 65536 00:10:15.639 }, 00:10:15.639 { 00:10:15.639 "name": "BaseBdev4", 00:10:15.639 "uuid": "bde647c8-c062-46f6-9185-d52024ce9831", 00:10:15.639 "is_configured": true, 00:10:15.639 "data_offset": 0, 00:10:15.639 "data_size": 65536 00:10:15.639 } 00:10:15.639 ] 00:10:15.639 }' 00:10:15.639 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.639 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.897 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:15.897 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.897 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.897 14:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.897 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.897 14:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.897 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.156 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.156 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.156 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.157 [2024-12-09 14:42:54.044398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.157 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.157 [2024-12-09 14:42:54.214348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.416 14:42:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.416 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.417 [2024-12-09 14:42:54.379201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:16.417 [2024-12-09 14:42:54.379300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.417 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.677 BaseBdev2 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.677 [ 00:10:16.677 { 00:10:16.677 "name": "BaseBdev2", 00:10:16.677 "aliases": [ 00:10:16.677 "eb73ed25-1b81-474d-b661-a009885120e1" 00:10:16.677 ], 00:10:16.677 "product_name": "Malloc disk", 00:10:16.677 "block_size": 512, 00:10:16.677 "num_blocks": 65536, 00:10:16.677 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:16.677 "assigned_rate_limits": { 00:10:16.677 "rw_ios_per_sec": 0, 00:10:16.677 "rw_mbytes_per_sec": 0, 00:10:16.677 "r_mbytes_per_sec": 0, 00:10:16.677 "w_mbytes_per_sec": 0 00:10:16.677 }, 00:10:16.677 "claimed": false, 00:10:16.677 "zoned": false, 00:10:16.677 "supported_io_types": { 00:10:16.677 "read": true, 00:10:16.677 "write": true, 00:10:16.677 "unmap": true, 00:10:16.677 "flush": true, 00:10:16.677 "reset": true, 00:10:16.677 "nvme_admin": false, 00:10:16.677 "nvme_io": false, 00:10:16.677 "nvme_io_md": false, 00:10:16.677 "write_zeroes": true, 00:10:16.677 "zcopy": true, 00:10:16.677 "get_zone_info": false, 00:10:16.677 "zone_management": false, 00:10:16.677 "zone_append": false, 00:10:16.677 "compare": false, 00:10:16.677 "compare_and_write": false, 00:10:16.677 "abort": true, 00:10:16.677 "seek_hole": false, 00:10:16.677 
"seek_data": false, 00:10:16.677 "copy": true, 00:10:16.677 "nvme_iov_md": false 00:10:16.677 }, 00:10:16.677 "memory_domains": [ 00:10:16.677 { 00:10:16.677 "dma_device_id": "system", 00:10:16.677 "dma_device_type": 1 00:10:16.677 }, 00:10:16.677 { 00:10:16.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.677 "dma_device_type": 2 00:10:16.677 } 00:10:16.677 ], 00:10:16.677 "driver_specific": {} 00:10:16.677 } 00:10:16.677 ] 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.677 BaseBdev3 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.677 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.678 [ 00:10:16.678 { 00:10:16.678 "name": "BaseBdev3", 00:10:16.678 "aliases": [ 00:10:16.678 "d35213c4-fb45-4e5a-9fa9-5cebe8d32797" 00:10:16.678 ], 00:10:16.678 "product_name": "Malloc disk", 00:10:16.678 "block_size": 512, 00:10:16.678 "num_blocks": 65536, 00:10:16.678 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:16.678 "assigned_rate_limits": { 00:10:16.678 "rw_ios_per_sec": 0, 00:10:16.678 "rw_mbytes_per_sec": 0, 00:10:16.678 "r_mbytes_per_sec": 0, 00:10:16.678 "w_mbytes_per_sec": 0 00:10:16.678 }, 00:10:16.678 "claimed": false, 00:10:16.678 "zoned": false, 00:10:16.678 "supported_io_types": { 00:10:16.678 "read": true, 00:10:16.678 "write": true, 00:10:16.678 "unmap": true, 00:10:16.678 "flush": true, 00:10:16.678 "reset": true, 00:10:16.678 "nvme_admin": false, 00:10:16.678 "nvme_io": false, 00:10:16.678 "nvme_io_md": false, 00:10:16.678 "write_zeroes": true, 00:10:16.678 "zcopy": true, 00:10:16.678 "get_zone_info": false, 00:10:16.678 "zone_management": false, 00:10:16.678 "zone_append": false, 00:10:16.678 "compare": false, 00:10:16.678 "compare_and_write": false, 00:10:16.678 "abort": true, 00:10:16.678 "seek_hole": false, 00:10:16.678 "seek_data": false, 
00:10:16.678 "copy": true, 00:10:16.678 "nvme_iov_md": false 00:10:16.678 }, 00:10:16.678 "memory_domains": [ 00:10:16.678 { 00:10:16.678 "dma_device_id": "system", 00:10:16.678 "dma_device_type": 1 00:10:16.678 }, 00:10:16.678 { 00:10:16.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.678 "dma_device_type": 2 00:10:16.678 } 00:10:16.678 ], 00:10:16.678 "driver_specific": {} 00:10:16.678 } 00:10:16.678 ] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.678 BaseBdev4 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.678 
14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.678 [ 00:10:16.678 { 00:10:16.678 "name": "BaseBdev4", 00:10:16.678 "aliases": [ 00:10:16.678 "e9ca8993-964e-4b3b-9989-c293f2b471b2" 00:10:16.678 ], 00:10:16.678 "product_name": "Malloc disk", 00:10:16.678 "block_size": 512, 00:10:16.678 "num_blocks": 65536, 00:10:16.678 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:16.678 "assigned_rate_limits": { 00:10:16.678 "rw_ios_per_sec": 0, 00:10:16.678 "rw_mbytes_per_sec": 0, 00:10:16.678 "r_mbytes_per_sec": 0, 00:10:16.678 "w_mbytes_per_sec": 0 00:10:16.678 }, 00:10:16.678 "claimed": false, 00:10:16.678 "zoned": false, 00:10:16.678 "supported_io_types": { 00:10:16.678 "read": true, 00:10:16.678 "write": true, 00:10:16.678 "unmap": true, 00:10:16.678 "flush": true, 00:10:16.678 "reset": true, 00:10:16.678 "nvme_admin": false, 00:10:16.678 "nvme_io": false, 00:10:16.678 "nvme_io_md": false, 00:10:16.678 "write_zeroes": true, 00:10:16.678 "zcopy": true, 00:10:16.678 "get_zone_info": false, 00:10:16.678 "zone_management": false, 00:10:16.678 "zone_append": false, 00:10:16.678 "compare": false, 00:10:16.678 "compare_and_write": false, 00:10:16.678 "abort": true, 00:10:16.678 "seek_hole": false, 00:10:16.678 "seek_data": false, 00:10:16.678 
"copy": true, 00:10:16.678 "nvme_iov_md": false 00:10:16.678 }, 00:10:16.678 "memory_domains": [ 00:10:16.678 { 00:10:16.678 "dma_device_id": "system", 00:10:16.678 "dma_device_type": 1 00:10:16.678 }, 00:10:16.678 { 00:10:16.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.678 "dma_device_type": 2 00:10:16.678 } 00:10:16.678 ], 00:10:16.678 "driver_specific": {} 00:10:16.678 } 00:10:16.678 ] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.678 [2024-12-09 14:42:54.787540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.678 [2024-12-09 14:42:54.787668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.678 [2024-12-09 14:42:54.787735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.678 [2024-12-09 14:42:54.789729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.678 [2024-12-09 14:42:54.789831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.678 14:42:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.678 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.939 "name": "Existed_Raid", 00:10:16.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.939 "strip_size_kb": 64, 00:10:16.939 "state": "configuring", 00:10:16.939 
"raid_level": "raid0", 00:10:16.939 "superblock": false, 00:10:16.939 "num_base_bdevs": 4, 00:10:16.939 "num_base_bdevs_discovered": 3, 00:10:16.939 "num_base_bdevs_operational": 4, 00:10:16.939 "base_bdevs_list": [ 00:10:16.939 { 00:10:16.939 "name": "BaseBdev1", 00:10:16.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.939 "is_configured": false, 00:10:16.939 "data_offset": 0, 00:10:16.939 "data_size": 0 00:10:16.939 }, 00:10:16.939 { 00:10:16.939 "name": "BaseBdev2", 00:10:16.939 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:16.939 "is_configured": true, 00:10:16.939 "data_offset": 0, 00:10:16.939 "data_size": 65536 00:10:16.939 }, 00:10:16.939 { 00:10:16.939 "name": "BaseBdev3", 00:10:16.939 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:16.939 "is_configured": true, 00:10:16.939 "data_offset": 0, 00:10:16.939 "data_size": 65536 00:10:16.939 }, 00:10:16.939 { 00:10:16.939 "name": "BaseBdev4", 00:10:16.939 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:16.939 "is_configured": true, 00:10:16.939 "data_offset": 0, 00:10:16.939 "data_size": 65536 00:10:16.939 } 00:10:16.939 ] 00:10:16.939 }' 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.939 14:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.198 [2024-12-09 14:42:55.270742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.198 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.458 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.458 "name": "Existed_Raid", 00:10:17.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.458 "strip_size_kb": 64, 00:10:17.458 "state": "configuring", 00:10:17.458 "raid_level": "raid0", 00:10:17.458 "superblock": false, 00:10:17.458 
"num_base_bdevs": 4, 00:10:17.458 "num_base_bdevs_discovered": 2, 00:10:17.458 "num_base_bdevs_operational": 4, 00:10:17.458 "base_bdevs_list": [ 00:10:17.458 { 00:10:17.458 "name": "BaseBdev1", 00:10:17.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.458 "is_configured": false, 00:10:17.458 "data_offset": 0, 00:10:17.458 "data_size": 0 00:10:17.458 }, 00:10:17.458 { 00:10:17.458 "name": null, 00:10:17.458 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:17.458 "is_configured": false, 00:10:17.458 "data_offset": 0, 00:10:17.458 "data_size": 65536 00:10:17.458 }, 00:10:17.458 { 00:10:17.458 "name": "BaseBdev3", 00:10:17.458 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:17.458 "is_configured": true, 00:10:17.458 "data_offset": 0, 00:10:17.458 "data_size": 65536 00:10:17.458 }, 00:10:17.458 { 00:10:17.458 "name": "BaseBdev4", 00:10:17.458 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:17.458 "is_configured": true, 00:10:17.458 "data_offset": 0, 00:10:17.458 "data_size": 65536 00:10:17.458 } 00:10:17.458 ] 00:10:17.458 }' 00:10:17.458 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.458 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:17.717 14:42:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.717 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.977 [2024-12-09 14:42:55.847602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.977 BaseBdev1 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.977 [ 00:10:17.977 { 00:10:17.977 "name": "BaseBdev1", 00:10:17.977 "aliases": [ 00:10:17.977 "9224f2fc-a8eb-4723-955a-c8a51fe24006" 00:10:17.977 ], 00:10:17.977 "product_name": "Malloc disk", 00:10:17.977 "block_size": 512, 00:10:17.977 "num_blocks": 65536, 00:10:17.977 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:17.977 "assigned_rate_limits": { 00:10:17.977 "rw_ios_per_sec": 0, 00:10:17.977 "rw_mbytes_per_sec": 0, 00:10:17.977 "r_mbytes_per_sec": 0, 00:10:17.977 "w_mbytes_per_sec": 0 00:10:17.977 }, 00:10:17.977 "claimed": true, 00:10:17.977 "claim_type": "exclusive_write", 00:10:17.977 "zoned": false, 00:10:17.977 "supported_io_types": { 00:10:17.977 "read": true, 00:10:17.977 "write": true, 00:10:17.977 "unmap": true, 00:10:17.977 "flush": true, 00:10:17.977 "reset": true, 00:10:17.977 "nvme_admin": false, 00:10:17.977 "nvme_io": false, 00:10:17.977 "nvme_io_md": false, 00:10:17.977 "write_zeroes": true, 00:10:17.977 "zcopy": true, 00:10:17.977 "get_zone_info": false, 00:10:17.977 "zone_management": false, 00:10:17.977 "zone_append": false, 00:10:17.977 "compare": false, 00:10:17.977 "compare_and_write": false, 00:10:17.977 "abort": true, 00:10:17.977 "seek_hole": false, 00:10:17.977 "seek_data": false, 00:10:17.977 "copy": true, 00:10:17.977 "nvme_iov_md": false 00:10:17.977 }, 00:10:17.977 "memory_domains": [ 00:10:17.977 { 00:10:17.977 "dma_device_id": "system", 00:10:17.977 "dma_device_type": 1 00:10:17.977 }, 00:10:17.977 { 00:10:17.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.977 "dma_device_type": 2 00:10:17.977 } 00:10:17.977 ], 00:10:17.977 "driver_specific": {} 00:10:17.977 } 00:10:17.977 ] 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.977 "name": "Existed_Raid", 00:10:17.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.977 "strip_size_kb": 64, 00:10:17.977 "state": "configuring", 00:10:17.977 "raid_level": "raid0", 00:10:17.977 "superblock": false, 
00:10:17.977 "num_base_bdevs": 4, 00:10:17.977 "num_base_bdevs_discovered": 3, 00:10:17.977 "num_base_bdevs_operational": 4, 00:10:17.977 "base_bdevs_list": [ 00:10:17.977 { 00:10:17.977 "name": "BaseBdev1", 00:10:17.977 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:17.977 "is_configured": true, 00:10:17.977 "data_offset": 0, 00:10:17.977 "data_size": 65536 00:10:17.977 }, 00:10:17.977 { 00:10:17.977 "name": null, 00:10:17.977 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:17.977 "is_configured": false, 00:10:17.977 "data_offset": 0, 00:10:17.977 "data_size": 65536 00:10:17.977 }, 00:10:17.977 { 00:10:17.977 "name": "BaseBdev3", 00:10:17.977 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:17.977 "is_configured": true, 00:10:17.977 "data_offset": 0, 00:10:17.977 "data_size": 65536 00:10:17.977 }, 00:10:17.977 { 00:10:17.977 "name": "BaseBdev4", 00:10:17.977 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:17.977 "is_configured": true, 00:10:17.977 "data_offset": 0, 00:10:17.977 "data_size": 65536 00:10:17.977 } 00:10:17.977 ] 00:10:17.977 }' 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.977 14:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:18.546 14:42:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.546 [2024-12-09 14:42:56.406892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.546 14:42:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.546 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.546 "name": "Existed_Raid", 00:10:18.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.546 "strip_size_kb": 64, 00:10:18.546 "state": "configuring", 00:10:18.546 "raid_level": "raid0", 00:10:18.546 "superblock": false, 00:10:18.546 "num_base_bdevs": 4, 00:10:18.546 "num_base_bdevs_discovered": 2, 00:10:18.546 "num_base_bdevs_operational": 4, 00:10:18.546 "base_bdevs_list": [ 00:10:18.546 { 00:10:18.546 "name": "BaseBdev1", 00:10:18.546 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:18.546 "is_configured": true, 00:10:18.546 "data_offset": 0, 00:10:18.546 "data_size": 65536 00:10:18.546 }, 00:10:18.546 { 00:10:18.546 "name": null, 00:10:18.546 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:18.546 "is_configured": false, 00:10:18.546 "data_offset": 0, 00:10:18.546 "data_size": 65536 00:10:18.546 }, 00:10:18.546 { 00:10:18.546 "name": null, 00:10:18.546 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:18.546 "is_configured": false, 00:10:18.546 "data_offset": 0, 00:10:18.546 "data_size": 65536 00:10:18.546 }, 00:10:18.547 { 00:10:18.547 "name": "BaseBdev4", 00:10:18.547 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 0, 00:10:18.547 "data_size": 65536 00:10:18.547 } 00:10:18.547 ] 00:10:18.547 }' 00:10:18.547 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.547 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.807 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.067 [2024-12-09 14:42:56.929946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.067 "name": "Existed_Raid", 00:10:19.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.067 "strip_size_kb": 64, 00:10:19.067 "state": "configuring", 00:10:19.067 "raid_level": "raid0", 00:10:19.067 "superblock": false, 00:10:19.067 "num_base_bdevs": 4, 00:10:19.067 "num_base_bdevs_discovered": 3, 00:10:19.067 "num_base_bdevs_operational": 4, 00:10:19.067 "base_bdevs_list": [ 00:10:19.067 { 00:10:19.067 "name": "BaseBdev1", 00:10:19.067 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:19.067 "is_configured": true, 00:10:19.067 "data_offset": 0, 00:10:19.067 "data_size": 65536 00:10:19.067 }, 00:10:19.067 { 00:10:19.067 "name": null, 00:10:19.067 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:19.067 "is_configured": false, 00:10:19.067 "data_offset": 0, 00:10:19.067 "data_size": 65536 00:10:19.067 }, 00:10:19.067 { 00:10:19.067 "name": "BaseBdev3", 00:10:19.067 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 
00:10:19.067 "is_configured": true, 00:10:19.067 "data_offset": 0, 00:10:19.067 "data_size": 65536 00:10:19.067 }, 00:10:19.067 { 00:10:19.067 "name": "BaseBdev4", 00:10:19.067 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:19.067 "is_configured": true, 00:10:19.067 "data_offset": 0, 00:10:19.067 "data_size": 65536 00:10:19.067 } 00:10:19.067 ] 00:10:19.067 }' 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.067 14:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.327 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.327 [2024-12-09 14:42:57.421172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.587 14:42:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.587 "name": "Existed_Raid", 00:10:19.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.587 "strip_size_kb": 64, 00:10:19.587 "state": "configuring", 00:10:19.587 "raid_level": "raid0", 00:10:19.587 "superblock": false, 00:10:19.587 "num_base_bdevs": 4, 00:10:19.587 "num_base_bdevs_discovered": 2, 00:10:19.587 
"num_base_bdevs_operational": 4, 00:10:19.587 "base_bdevs_list": [ 00:10:19.587 { 00:10:19.587 "name": null, 00:10:19.587 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:19.587 "is_configured": false, 00:10:19.587 "data_offset": 0, 00:10:19.587 "data_size": 65536 00:10:19.587 }, 00:10:19.587 { 00:10:19.587 "name": null, 00:10:19.587 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:19.587 "is_configured": false, 00:10:19.587 "data_offset": 0, 00:10:19.587 "data_size": 65536 00:10:19.587 }, 00:10:19.587 { 00:10:19.587 "name": "BaseBdev3", 00:10:19.587 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:19.587 "is_configured": true, 00:10:19.587 "data_offset": 0, 00:10:19.587 "data_size": 65536 00:10:19.587 }, 00:10:19.587 { 00:10:19.587 "name": "BaseBdev4", 00:10:19.587 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:19.587 "is_configured": true, 00:10:19.587 "data_offset": 0, 00:10:19.587 "data_size": 65536 00:10:19.587 } 00:10:19.587 ] 00:10:19.587 }' 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.587 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.847 [2024-12-09 14:42:57.959903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.847 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.132 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.132 14:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.132 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.132 
14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.132 14:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.132 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.132 "name": "Existed_Raid", 00:10:20.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.132 "strip_size_kb": 64, 00:10:20.132 "state": "configuring", 00:10:20.132 "raid_level": "raid0", 00:10:20.132 "superblock": false, 00:10:20.132 "num_base_bdevs": 4, 00:10:20.132 "num_base_bdevs_discovered": 3, 00:10:20.132 "num_base_bdevs_operational": 4, 00:10:20.132 "base_bdevs_list": [ 00:10:20.132 { 00:10:20.132 "name": null, 00:10:20.132 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:20.132 "is_configured": false, 00:10:20.132 "data_offset": 0, 00:10:20.132 "data_size": 65536 00:10:20.132 }, 00:10:20.132 { 00:10:20.132 "name": "BaseBdev2", 00:10:20.132 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:20.132 "is_configured": true, 00:10:20.132 "data_offset": 0, 00:10:20.132 "data_size": 65536 00:10:20.132 }, 00:10:20.132 { 00:10:20.132 "name": "BaseBdev3", 00:10:20.132 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:20.132 "is_configured": true, 00:10:20.132 "data_offset": 0, 00:10:20.132 "data_size": 65536 00:10:20.132 }, 00:10:20.132 { 00:10:20.132 "name": "BaseBdev4", 00:10:20.132 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:20.132 "is_configured": true, 00:10:20.132 "data_offset": 0, 00:10:20.132 "data_size": 65536 00:10:20.132 } 00:10:20.132 ] 00:10:20.132 }' 00:10:20.132 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.132 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.391 14:42:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9224f2fc-a8eb-4723-955a-c8a51fe24006 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.651 [2024-12-09 14:42:58.558731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.651 [2024-12-09 14:42:58.558962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.651 [2024-12-09 14:42:58.558996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:20.651 [2024-12-09 14:42:58.559375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:20.651 [2024-12-09 14:42:58.559624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.651 [2024-12-09 14:42:58.559645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:20.651 [2024-12-09 14:42:58.559990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.651 NewBaseBdev 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:20.651 [ 00:10:20.651 { 00:10:20.651 "name": "NewBaseBdev", 00:10:20.651 "aliases": [ 00:10:20.651 "9224f2fc-a8eb-4723-955a-c8a51fe24006" 00:10:20.651 ], 00:10:20.651 "product_name": "Malloc disk", 00:10:20.651 "block_size": 512, 00:10:20.651 "num_blocks": 65536, 00:10:20.651 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:20.651 "assigned_rate_limits": { 00:10:20.651 "rw_ios_per_sec": 0, 00:10:20.651 "rw_mbytes_per_sec": 0, 00:10:20.651 "r_mbytes_per_sec": 0, 00:10:20.651 "w_mbytes_per_sec": 0 00:10:20.651 }, 00:10:20.651 "claimed": true, 00:10:20.651 "claim_type": "exclusive_write", 00:10:20.651 "zoned": false, 00:10:20.651 "supported_io_types": { 00:10:20.651 "read": true, 00:10:20.651 "write": true, 00:10:20.651 "unmap": true, 00:10:20.651 "flush": true, 00:10:20.651 "reset": true, 00:10:20.651 "nvme_admin": false, 00:10:20.651 "nvme_io": false, 00:10:20.651 "nvme_io_md": false, 00:10:20.651 "write_zeroes": true, 00:10:20.651 "zcopy": true, 00:10:20.651 "get_zone_info": false, 00:10:20.651 "zone_management": false, 00:10:20.651 "zone_append": false, 00:10:20.651 "compare": false, 00:10:20.651 "compare_and_write": false, 00:10:20.651 "abort": true, 00:10:20.651 "seek_hole": false, 00:10:20.651 "seek_data": false, 00:10:20.651 "copy": true, 00:10:20.651 "nvme_iov_md": false 00:10:20.651 }, 00:10:20.651 "memory_domains": [ 00:10:20.651 { 00:10:20.651 "dma_device_id": "system", 00:10:20.651 "dma_device_type": 1 00:10:20.651 }, 00:10:20.651 { 00:10:20.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.651 "dma_device_type": 2 00:10:20.651 } 00:10:20.651 ], 00:10:20.651 "driver_specific": {} 00:10:20.651 } 00:10:20.651 ] 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.651 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.652 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.652 "name": "Existed_Raid", 00:10:20.652 "uuid": "e2a97894-fb3e-4d6e-ba8e-428efcdd7a36", 00:10:20.652 "strip_size_kb": 64, 00:10:20.652 "state": "online", 00:10:20.652 "raid_level": "raid0", 00:10:20.652 "superblock": false, 00:10:20.652 "num_base_bdevs": 4, 00:10:20.652 
"num_base_bdevs_discovered": 4, 00:10:20.652 "num_base_bdevs_operational": 4, 00:10:20.652 "base_bdevs_list": [ 00:10:20.652 { 00:10:20.652 "name": "NewBaseBdev", 00:10:20.652 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:20.652 "is_configured": true, 00:10:20.652 "data_offset": 0, 00:10:20.652 "data_size": 65536 00:10:20.652 }, 00:10:20.652 { 00:10:20.652 "name": "BaseBdev2", 00:10:20.652 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:20.652 "is_configured": true, 00:10:20.652 "data_offset": 0, 00:10:20.652 "data_size": 65536 00:10:20.652 }, 00:10:20.652 { 00:10:20.652 "name": "BaseBdev3", 00:10:20.652 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:20.652 "is_configured": true, 00:10:20.652 "data_offset": 0, 00:10:20.652 "data_size": 65536 00:10:20.652 }, 00:10:20.652 { 00:10:20.652 "name": "BaseBdev4", 00:10:20.652 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:20.652 "is_configured": true, 00:10:20.652 "data_offset": 0, 00:10:20.652 "data_size": 65536 00:10:20.652 } 00:10:20.652 ] 00:10:20.652 }' 00:10:20.652 14:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.652 14:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.222 [2024-12-09 14:42:59.094364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.222 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.223 "name": "Existed_Raid", 00:10:21.223 "aliases": [ 00:10:21.223 "e2a97894-fb3e-4d6e-ba8e-428efcdd7a36" 00:10:21.223 ], 00:10:21.223 "product_name": "Raid Volume", 00:10:21.223 "block_size": 512, 00:10:21.223 "num_blocks": 262144, 00:10:21.223 "uuid": "e2a97894-fb3e-4d6e-ba8e-428efcdd7a36", 00:10:21.223 "assigned_rate_limits": { 00:10:21.223 "rw_ios_per_sec": 0, 00:10:21.223 "rw_mbytes_per_sec": 0, 00:10:21.223 "r_mbytes_per_sec": 0, 00:10:21.223 "w_mbytes_per_sec": 0 00:10:21.223 }, 00:10:21.223 "claimed": false, 00:10:21.223 "zoned": false, 00:10:21.223 "supported_io_types": { 00:10:21.223 "read": true, 00:10:21.223 "write": true, 00:10:21.223 "unmap": true, 00:10:21.223 "flush": true, 00:10:21.223 "reset": true, 00:10:21.223 "nvme_admin": false, 00:10:21.223 "nvme_io": false, 00:10:21.223 "nvme_io_md": false, 00:10:21.223 "write_zeroes": true, 00:10:21.223 "zcopy": false, 00:10:21.223 "get_zone_info": false, 00:10:21.223 "zone_management": false, 00:10:21.223 "zone_append": false, 00:10:21.223 "compare": false, 00:10:21.223 "compare_and_write": false, 00:10:21.223 "abort": false, 00:10:21.223 "seek_hole": false, 00:10:21.223 "seek_data": false, 00:10:21.223 "copy": false, 00:10:21.223 "nvme_iov_md": false 00:10:21.223 }, 00:10:21.223 "memory_domains": [ 
00:10:21.223 { 00:10:21.223 "dma_device_id": "system", 00:10:21.223 "dma_device_type": 1 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.223 "dma_device_type": 2 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "dma_device_id": "system", 00:10:21.223 "dma_device_type": 1 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.223 "dma_device_type": 2 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "dma_device_id": "system", 00:10:21.223 "dma_device_type": 1 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.223 "dma_device_type": 2 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "dma_device_id": "system", 00:10:21.223 "dma_device_type": 1 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.223 "dma_device_type": 2 00:10:21.223 } 00:10:21.223 ], 00:10:21.223 "driver_specific": { 00:10:21.223 "raid": { 00:10:21.223 "uuid": "e2a97894-fb3e-4d6e-ba8e-428efcdd7a36", 00:10:21.223 "strip_size_kb": 64, 00:10:21.223 "state": "online", 00:10:21.223 "raid_level": "raid0", 00:10:21.223 "superblock": false, 00:10:21.223 "num_base_bdevs": 4, 00:10:21.223 "num_base_bdevs_discovered": 4, 00:10:21.223 "num_base_bdevs_operational": 4, 00:10:21.223 "base_bdevs_list": [ 00:10:21.223 { 00:10:21.223 "name": "NewBaseBdev", 00:10:21.223 "uuid": "9224f2fc-a8eb-4723-955a-c8a51fe24006", 00:10:21.223 "is_configured": true, 00:10:21.223 "data_offset": 0, 00:10:21.223 "data_size": 65536 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "name": "BaseBdev2", 00:10:21.223 "uuid": "eb73ed25-1b81-474d-b661-a009885120e1", 00:10:21.223 "is_configured": true, 00:10:21.223 "data_offset": 0, 00:10:21.223 "data_size": 65536 00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "name": "BaseBdev3", 00:10:21.223 "uuid": "d35213c4-fb45-4e5a-9fa9-5cebe8d32797", 00:10:21.223 "is_configured": true, 00:10:21.223 "data_offset": 0, 00:10:21.223 "data_size": 65536 
00:10:21.223 }, 00:10:21.223 { 00:10:21.223 "name": "BaseBdev4", 00:10:21.223 "uuid": "e9ca8993-964e-4b3b-9989-c293f2b471b2", 00:10:21.223 "is_configured": true, 00:10:21.223 "data_offset": 0, 00:10:21.223 "data_size": 65536 00:10:21.223 } 00:10:21.223 ] 00:10:21.223 } 00:10:21.223 } 00:10:21.223 }' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:21.223 BaseBdev2 00:10:21.223 BaseBdev3 00:10:21.223 BaseBdev4' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.223 
14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.223 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.483 [2024-12-09 14:42:59.441376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.483 [2024-12-09 14:42:59.441441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.483 [2024-12-09 14:42:59.441575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.483 [2024-12-09 14:42:59.441686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.483 [2024-12-09 14:42:59.441700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 70652 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 70652 ']' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 70652 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70652 00:10:21.483 killing process with pid 70652 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70652' 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 70652 00:10:21.483 [2024-12-09 14:42:59.487952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.483 14:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 70652 00:10:22.053 [2024-12-09 14:42:59.978051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.435 00:10:23.435 real 0m12.237s 00:10:23.435 user 0m19.259s 00:10:23.435 sys 0m2.176s 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.435 ************************************ 00:10:23.435 END TEST raid_state_function_test 00:10:23.435 ************************************ 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.435 14:43:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:23.435 14:43:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.435 14:43:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.435 14:43:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.435 ************************************ 00:10:23.435 START TEST raid_state_function_test_sb 00:10:23.435 ************************************ 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:23.435 
14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71329 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71329' 00:10:23.435 Process raid pid: 71329 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71329 00:10:23.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71329 ']' 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.435 14:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.435 [2024-12-09 14:43:01.475657] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:10:23.435 [2024-12-09 14:43:01.475780] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.695 [2024-12-09 14:43:01.651664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.695 [2024-12-09 14:43:01.803063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.955 [2024-12-09 14:43:02.056903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.955 [2024-12-09 14:43:02.056977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.214 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.214 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:24.214 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.214 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.214 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.214 [2024-12-09 14:43:02.330587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.214 [2024-12-09 14:43:02.330679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.214 [2024-12-09 14:43:02.330693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.214 [2024-12-09 14:43:02.330707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.214 [2024-12-09 14:43:02.330716] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:24.214 [2024-12-09 14:43:02.330727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.214 [2024-12-09 14:43:02.330735] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.214 [2024-12-09 14:43:02.330747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.474 14:43:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.474 "name": "Existed_Raid", 00:10:24.474 "uuid": "f4f7b642-0c7a-48ef-bfbd-489f38b6b3f3", 00:10:24.474 "strip_size_kb": 64, 00:10:24.474 "state": "configuring", 00:10:24.474 "raid_level": "raid0", 00:10:24.474 "superblock": true, 00:10:24.474 "num_base_bdevs": 4, 00:10:24.474 "num_base_bdevs_discovered": 0, 00:10:24.474 "num_base_bdevs_operational": 4, 00:10:24.474 "base_bdevs_list": [ 00:10:24.474 { 00:10:24.474 "name": "BaseBdev1", 00:10:24.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.474 "is_configured": false, 00:10:24.474 "data_offset": 0, 00:10:24.474 "data_size": 0 00:10:24.474 }, 00:10:24.474 { 00:10:24.474 "name": "BaseBdev2", 00:10:24.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.474 "is_configured": false, 00:10:24.474 "data_offset": 0, 00:10:24.474 "data_size": 0 00:10:24.474 }, 00:10:24.474 { 00:10:24.474 "name": "BaseBdev3", 00:10:24.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.474 "is_configured": false, 00:10:24.474 "data_offset": 0, 00:10:24.474 "data_size": 0 00:10:24.474 }, 00:10:24.474 { 00:10:24.474 "name": "BaseBdev4", 00:10:24.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.474 "is_configured": false, 00:10:24.474 "data_offset": 0, 00:10:24.474 "data_size": 0 00:10:24.474 } 00:10:24.474 ] 00:10:24.474 }' 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.474 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.734 [2024-12-09 14:43:02.789827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.734 [2024-12-09 14:43:02.789999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.734 [2024-12-09 14:43:02.797756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.734 [2024-12-09 14:43:02.797864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.734 [2024-12-09 14:43:02.797902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.734 [2024-12-09 14:43:02.797934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.734 [2024-12-09 14:43:02.797977] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:24.734 [2024-12-09 14:43:02.798010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.734 [2024-12-09 14:43:02.798044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:24.734 [2024-12-09 14:43:02.798076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.734 [2024-12-09 14:43:02.850837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.734 BaseBdev1 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.734 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.994 [ 00:10:24.994 { 00:10:24.994 "name": "BaseBdev1", 00:10:24.994 "aliases": [ 00:10:24.994 "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b" 00:10:24.994 ], 00:10:24.994 "product_name": "Malloc disk", 00:10:24.994 "block_size": 512, 00:10:24.994 "num_blocks": 65536, 00:10:24.994 "uuid": "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b", 00:10:24.994 "assigned_rate_limits": { 00:10:24.994 "rw_ios_per_sec": 0, 00:10:24.994 "rw_mbytes_per_sec": 0, 00:10:24.994 "r_mbytes_per_sec": 0, 00:10:24.994 "w_mbytes_per_sec": 0 00:10:24.994 }, 00:10:24.994 "claimed": true, 00:10:24.994 "claim_type": "exclusive_write", 00:10:24.994 "zoned": false, 00:10:24.994 "supported_io_types": { 00:10:24.994 "read": true, 00:10:24.994 "write": true, 00:10:24.994 "unmap": true, 00:10:24.994 "flush": true, 00:10:24.994 "reset": true, 00:10:24.994 "nvme_admin": false, 00:10:24.994 "nvme_io": false, 00:10:24.994 "nvme_io_md": false, 00:10:24.994 "write_zeroes": true, 00:10:24.994 "zcopy": true, 00:10:24.994 "get_zone_info": false, 00:10:24.994 "zone_management": false, 00:10:24.994 "zone_append": false, 00:10:24.994 "compare": false, 00:10:24.994 "compare_and_write": false, 00:10:24.994 "abort": true, 00:10:24.994 "seek_hole": false, 00:10:24.994 "seek_data": false, 00:10:24.994 "copy": true, 00:10:24.994 "nvme_iov_md": false 00:10:24.994 }, 00:10:24.994 "memory_domains": [ 00:10:24.994 { 00:10:24.994 "dma_device_id": "system", 00:10:24.994 "dma_device_type": 1 00:10:24.994 }, 00:10:24.994 { 00:10:24.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.994 "dma_device_type": 2 00:10:24.994 } 00:10:24.994 ], 00:10:24.994 "driver_specific": {} 
00:10:24.994 } 00:10:24.994 ] 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.994 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.995 "name": "Existed_Raid", 00:10:24.995 "uuid": "c34120ee-b7dd-45fe-ba25-379d1976252c", 00:10:24.995 "strip_size_kb": 64, 00:10:24.995 "state": "configuring", 00:10:24.995 "raid_level": "raid0", 00:10:24.995 "superblock": true, 00:10:24.995 "num_base_bdevs": 4, 00:10:24.995 "num_base_bdevs_discovered": 1, 00:10:24.995 "num_base_bdevs_operational": 4, 00:10:24.995 "base_bdevs_list": [ 00:10:24.995 { 00:10:24.995 "name": "BaseBdev1", 00:10:24.995 "uuid": "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b", 00:10:24.995 "is_configured": true, 00:10:24.995 "data_offset": 2048, 00:10:24.995 "data_size": 63488 00:10:24.995 }, 00:10:24.995 { 00:10:24.995 "name": "BaseBdev2", 00:10:24.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.995 "is_configured": false, 00:10:24.995 "data_offset": 0, 00:10:24.995 "data_size": 0 00:10:24.995 }, 00:10:24.995 { 00:10:24.995 "name": "BaseBdev3", 00:10:24.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.995 "is_configured": false, 00:10:24.995 "data_offset": 0, 00:10:24.995 "data_size": 0 00:10:24.995 }, 00:10:24.995 { 00:10:24.995 "name": "BaseBdev4", 00:10:24.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.995 "is_configured": false, 00:10:24.995 "data_offset": 0, 00:10:24.995 "data_size": 0 00:10:24.995 } 00:10:24.995 ] 00:10:24.995 }' 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.995 14:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.255 [2024-12-09 14:43:03.274291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.255 [2024-12-09 14:43:03.274512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.255 [2024-12-09 14:43:03.286312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.255 [2024-12-09 14:43:03.288672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.255 [2024-12-09 14:43:03.288769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.255 [2024-12-09 14:43:03.288806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.255 [2024-12-09 14:43:03.288837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.255 [2024-12-09 14:43:03.288929] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.255 [2024-12-09 14:43:03.288968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.255 14:43:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.255 "name": 
"Existed_Raid", 00:10:25.255 "uuid": "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd", 00:10:25.255 "strip_size_kb": 64, 00:10:25.255 "state": "configuring", 00:10:25.255 "raid_level": "raid0", 00:10:25.255 "superblock": true, 00:10:25.255 "num_base_bdevs": 4, 00:10:25.255 "num_base_bdevs_discovered": 1, 00:10:25.255 "num_base_bdevs_operational": 4, 00:10:25.255 "base_bdevs_list": [ 00:10:25.255 { 00:10:25.255 "name": "BaseBdev1", 00:10:25.255 "uuid": "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b", 00:10:25.255 "is_configured": true, 00:10:25.255 "data_offset": 2048, 00:10:25.255 "data_size": 63488 00:10:25.255 }, 00:10:25.255 { 00:10:25.255 "name": "BaseBdev2", 00:10:25.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.255 "is_configured": false, 00:10:25.255 "data_offset": 0, 00:10:25.255 "data_size": 0 00:10:25.255 }, 00:10:25.255 { 00:10:25.255 "name": "BaseBdev3", 00:10:25.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.255 "is_configured": false, 00:10:25.255 "data_offset": 0, 00:10:25.255 "data_size": 0 00:10:25.255 }, 00:10:25.255 { 00:10:25.255 "name": "BaseBdev4", 00:10:25.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.255 "is_configured": false, 00:10:25.255 "data_offset": 0, 00:10:25.255 "data_size": 0 00:10:25.255 } 00:10:25.255 ] 00:10:25.255 }' 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.255 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 [2024-12-09 14:43:03.797436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:25.825 BaseBdev2 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 [ 00:10:25.825 { 00:10:25.825 "name": "BaseBdev2", 00:10:25.825 "aliases": [ 00:10:25.825 "20b2e22b-c79c-448d-bcfc-b0e2fc3dc58a" 00:10:25.825 ], 00:10:25.825 "product_name": "Malloc disk", 00:10:25.825 "block_size": 512, 00:10:25.825 "num_blocks": 65536, 00:10:25.825 "uuid": "20b2e22b-c79c-448d-bcfc-b0e2fc3dc58a", 00:10:25.825 
"assigned_rate_limits": { 00:10:25.825 "rw_ios_per_sec": 0, 00:10:25.825 "rw_mbytes_per_sec": 0, 00:10:25.825 "r_mbytes_per_sec": 0, 00:10:25.825 "w_mbytes_per_sec": 0 00:10:25.825 }, 00:10:25.825 "claimed": true, 00:10:25.825 "claim_type": "exclusive_write", 00:10:25.825 "zoned": false, 00:10:25.825 "supported_io_types": { 00:10:25.825 "read": true, 00:10:25.825 "write": true, 00:10:25.825 "unmap": true, 00:10:25.825 "flush": true, 00:10:25.825 "reset": true, 00:10:25.825 "nvme_admin": false, 00:10:25.825 "nvme_io": false, 00:10:25.825 "nvme_io_md": false, 00:10:25.825 "write_zeroes": true, 00:10:25.825 "zcopy": true, 00:10:25.825 "get_zone_info": false, 00:10:25.825 "zone_management": false, 00:10:25.825 "zone_append": false, 00:10:25.825 "compare": false, 00:10:25.825 "compare_and_write": false, 00:10:25.825 "abort": true, 00:10:25.825 "seek_hole": false, 00:10:25.825 "seek_data": false, 00:10:25.825 "copy": true, 00:10:25.825 "nvme_iov_md": false 00:10:25.825 }, 00:10:25.825 "memory_domains": [ 00:10:25.825 { 00:10:25.825 "dma_device_id": "system", 00:10:25.825 "dma_device_type": 1 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.825 "dma_device_type": 2 00:10:25.825 } 00:10:25.825 ], 00:10:25.825 "driver_specific": {} 00:10:25.825 } 00:10:25.825 ] 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.825 "name": "Existed_Raid", 00:10:25.825 "uuid": "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd", 00:10:25.825 "strip_size_kb": 64, 00:10:25.825 "state": "configuring", 00:10:25.825 "raid_level": "raid0", 00:10:25.825 "superblock": true, 00:10:25.825 "num_base_bdevs": 4, 00:10:25.825 "num_base_bdevs_discovered": 2, 00:10:25.825 "num_base_bdevs_operational": 4, 
00:10:25.825 "base_bdevs_list": [ 00:10:25.825 { 00:10:25.825 "name": "BaseBdev1", 00:10:25.825 "uuid": "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b", 00:10:25.825 "is_configured": true, 00:10:25.825 "data_offset": 2048, 00:10:25.825 "data_size": 63488 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "name": "BaseBdev2", 00:10:25.825 "uuid": "20b2e22b-c79c-448d-bcfc-b0e2fc3dc58a", 00:10:25.825 "is_configured": true, 00:10:25.825 "data_offset": 2048, 00:10:25.825 "data_size": 63488 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "name": "BaseBdev3", 00:10:25.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.825 "is_configured": false, 00:10:25.825 "data_offset": 0, 00:10:25.825 "data_size": 0 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "name": "BaseBdev4", 00:10:25.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.825 "is_configured": false, 00:10:25.825 "data_offset": 0, 00:10:25.825 "data_size": 0 00:10:25.825 } 00:10:25.825 ] 00:10:25.825 }' 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.825 14:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.394 [2024-12-09 14:43:04.326877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.394 BaseBdev3 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.394 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.394 [ 00:10:26.394 { 00:10:26.394 "name": "BaseBdev3", 00:10:26.394 "aliases": [ 00:10:26.394 "69d58ce3-c108-45cf-805c-317f27cb7723" 00:10:26.394 ], 00:10:26.394 "product_name": "Malloc disk", 00:10:26.394 "block_size": 512, 00:10:26.394 "num_blocks": 65536, 00:10:26.394 "uuid": "69d58ce3-c108-45cf-805c-317f27cb7723", 00:10:26.394 "assigned_rate_limits": { 00:10:26.394 "rw_ios_per_sec": 0, 00:10:26.394 "rw_mbytes_per_sec": 0, 00:10:26.394 "r_mbytes_per_sec": 0, 00:10:26.394 "w_mbytes_per_sec": 0 00:10:26.394 }, 00:10:26.394 "claimed": true, 00:10:26.394 "claim_type": "exclusive_write", 00:10:26.394 "zoned": false, 00:10:26.394 "supported_io_types": { 00:10:26.394 "read": true, 00:10:26.394 
"write": true, 00:10:26.394 "unmap": true, 00:10:26.394 "flush": true, 00:10:26.394 "reset": true, 00:10:26.394 "nvme_admin": false, 00:10:26.394 "nvme_io": false, 00:10:26.394 "nvme_io_md": false, 00:10:26.394 "write_zeroes": true, 00:10:26.394 "zcopy": true, 00:10:26.394 "get_zone_info": false, 00:10:26.394 "zone_management": false, 00:10:26.394 "zone_append": false, 00:10:26.394 "compare": false, 00:10:26.394 "compare_and_write": false, 00:10:26.394 "abort": true, 00:10:26.394 "seek_hole": false, 00:10:26.394 "seek_data": false, 00:10:26.394 "copy": true, 00:10:26.394 "nvme_iov_md": false 00:10:26.394 }, 00:10:26.394 "memory_domains": [ 00:10:26.394 { 00:10:26.394 "dma_device_id": "system", 00:10:26.394 "dma_device_type": 1 00:10:26.394 }, 00:10:26.394 { 00:10:26.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.395 "dma_device_type": 2 00:10:26.395 } 00:10:26.395 ], 00:10:26.395 "driver_specific": {} 00:10:26.395 } 00:10:26.395 ] 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.395 "name": "Existed_Raid", 00:10:26.395 "uuid": "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd", 00:10:26.395 "strip_size_kb": 64, 00:10:26.395 "state": "configuring", 00:10:26.395 "raid_level": "raid0", 00:10:26.395 "superblock": true, 00:10:26.395 "num_base_bdevs": 4, 00:10:26.395 "num_base_bdevs_discovered": 3, 00:10:26.395 "num_base_bdevs_operational": 4, 00:10:26.395 "base_bdevs_list": [ 00:10:26.395 { 00:10:26.395 "name": "BaseBdev1", 00:10:26.395 "uuid": "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b", 00:10:26.395 "is_configured": true, 00:10:26.395 "data_offset": 2048, 00:10:26.395 "data_size": 63488 00:10:26.395 }, 00:10:26.395 { 00:10:26.395 "name": "BaseBdev2", 00:10:26.395 "uuid": 
"20b2e22b-c79c-448d-bcfc-b0e2fc3dc58a", 00:10:26.395 "is_configured": true, 00:10:26.395 "data_offset": 2048, 00:10:26.395 "data_size": 63488 00:10:26.395 }, 00:10:26.395 { 00:10:26.395 "name": "BaseBdev3", 00:10:26.395 "uuid": "69d58ce3-c108-45cf-805c-317f27cb7723", 00:10:26.395 "is_configured": true, 00:10:26.395 "data_offset": 2048, 00:10:26.395 "data_size": 63488 00:10:26.395 }, 00:10:26.395 { 00:10:26.395 "name": "BaseBdev4", 00:10:26.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.395 "is_configured": false, 00:10:26.395 "data_offset": 0, 00:10:26.395 "data_size": 0 00:10:26.395 } 00:10:26.395 ] 00:10:26.395 }' 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.395 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.965 [2024-12-09 14:43:04.852302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.965 [2024-12-09 14:43:04.852699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.965 [2024-12-09 14:43:04.852721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:26.965 BaseBdev4 00:10:26.965 [2024-12-09 14:43:04.853100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:26.965 [2024-12-09 14:43:04.853310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.965 [2024-12-09 14:43:04.853326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:26.965 [2024-12-09 14:43:04.853526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.965 [ 00:10:26.965 { 00:10:26.965 "name": "BaseBdev4", 00:10:26.965 "aliases": [ 00:10:26.965 "14198864-9816-4da4-b829-ebc9dd0502c1" 00:10:26.965 ], 00:10:26.965 "product_name": "Malloc disk", 00:10:26.965 "block_size": 512, 00:10:26.965 
"num_blocks": 65536, 00:10:26.965 "uuid": "14198864-9816-4da4-b829-ebc9dd0502c1", 00:10:26.965 "assigned_rate_limits": { 00:10:26.965 "rw_ios_per_sec": 0, 00:10:26.965 "rw_mbytes_per_sec": 0, 00:10:26.965 "r_mbytes_per_sec": 0, 00:10:26.965 "w_mbytes_per_sec": 0 00:10:26.965 }, 00:10:26.965 "claimed": true, 00:10:26.965 "claim_type": "exclusive_write", 00:10:26.965 "zoned": false, 00:10:26.965 "supported_io_types": { 00:10:26.965 "read": true, 00:10:26.965 "write": true, 00:10:26.965 "unmap": true, 00:10:26.965 "flush": true, 00:10:26.965 "reset": true, 00:10:26.965 "nvme_admin": false, 00:10:26.965 "nvme_io": false, 00:10:26.965 "nvme_io_md": false, 00:10:26.965 "write_zeroes": true, 00:10:26.965 "zcopy": true, 00:10:26.965 "get_zone_info": false, 00:10:26.965 "zone_management": false, 00:10:26.965 "zone_append": false, 00:10:26.965 "compare": false, 00:10:26.965 "compare_and_write": false, 00:10:26.965 "abort": true, 00:10:26.965 "seek_hole": false, 00:10:26.965 "seek_data": false, 00:10:26.965 "copy": true, 00:10:26.965 "nvme_iov_md": false 00:10:26.965 }, 00:10:26.965 "memory_domains": [ 00:10:26.965 { 00:10:26.965 "dma_device_id": "system", 00:10:26.965 "dma_device_type": 1 00:10:26.965 }, 00:10:26.965 { 00:10:26.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.965 "dma_device_type": 2 00:10:26.965 } 00:10:26.965 ], 00:10:26.965 "driver_specific": {} 00:10:26.965 } 00:10:26.965 ] 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.965 "name": "Existed_Raid", 00:10:26.965 "uuid": "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd", 00:10:26.965 "strip_size_kb": 64, 00:10:26.965 "state": "online", 00:10:26.965 "raid_level": "raid0", 00:10:26.965 "superblock": true, 00:10:26.965 "num_base_bdevs": 4, 
00:10:26.965 "num_base_bdevs_discovered": 4, 00:10:26.965 "num_base_bdevs_operational": 4, 00:10:26.965 "base_bdevs_list": [ 00:10:26.965 { 00:10:26.965 "name": "BaseBdev1", 00:10:26.965 "uuid": "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b", 00:10:26.965 "is_configured": true, 00:10:26.965 "data_offset": 2048, 00:10:26.965 "data_size": 63488 00:10:26.965 }, 00:10:26.965 { 00:10:26.965 "name": "BaseBdev2", 00:10:26.965 "uuid": "20b2e22b-c79c-448d-bcfc-b0e2fc3dc58a", 00:10:26.965 "is_configured": true, 00:10:26.965 "data_offset": 2048, 00:10:26.965 "data_size": 63488 00:10:26.965 }, 00:10:26.965 { 00:10:26.965 "name": "BaseBdev3", 00:10:26.965 "uuid": "69d58ce3-c108-45cf-805c-317f27cb7723", 00:10:26.965 "is_configured": true, 00:10:26.965 "data_offset": 2048, 00:10:26.965 "data_size": 63488 00:10:26.965 }, 00:10:26.965 { 00:10:26.965 "name": "BaseBdev4", 00:10:26.965 "uuid": "14198864-9816-4da4-b829-ebc9dd0502c1", 00:10:26.965 "is_configured": true, 00:10:26.965 "data_offset": 2048, 00:10:26.965 "data_size": 63488 00:10:26.965 } 00:10:26.965 ] 00:10:26.965 }' 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.965 14:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.224 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.224 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.224 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.484 
14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.484 [2024-12-09 14:43:05.360031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.484 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.484 "name": "Existed_Raid", 00:10:27.484 "aliases": [ 00:10:27.484 "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd" 00:10:27.484 ], 00:10:27.484 "product_name": "Raid Volume", 00:10:27.484 "block_size": 512, 00:10:27.484 "num_blocks": 253952, 00:10:27.484 "uuid": "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd", 00:10:27.484 "assigned_rate_limits": { 00:10:27.484 "rw_ios_per_sec": 0, 00:10:27.484 "rw_mbytes_per_sec": 0, 00:10:27.484 "r_mbytes_per_sec": 0, 00:10:27.484 "w_mbytes_per_sec": 0 00:10:27.484 }, 00:10:27.484 "claimed": false, 00:10:27.484 "zoned": false, 00:10:27.484 "supported_io_types": { 00:10:27.484 "read": true, 00:10:27.484 "write": true, 00:10:27.484 "unmap": true, 00:10:27.484 "flush": true, 00:10:27.484 "reset": true, 00:10:27.484 "nvme_admin": false, 00:10:27.484 "nvme_io": false, 00:10:27.484 "nvme_io_md": false, 00:10:27.484 "write_zeroes": true, 00:10:27.484 "zcopy": false, 00:10:27.484 "get_zone_info": false, 00:10:27.484 "zone_management": false, 00:10:27.484 "zone_append": false, 00:10:27.484 "compare": false, 00:10:27.484 "compare_and_write": false, 00:10:27.484 "abort": false, 00:10:27.484 "seek_hole": false, 00:10:27.484 "seek_data": false, 00:10:27.484 "copy": false, 00:10:27.484 
"nvme_iov_md": false 00:10:27.484 }, 00:10:27.484 "memory_domains": [ 00:10:27.484 { 00:10:27.484 "dma_device_id": "system", 00:10:27.484 "dma_device_type": 1 00:10:27.484 }, 00:10:27.484 { 00:10:27.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.484 "dma_device_type": 2 00:10:27.484 }, 00:10:27.484 { 00:10:27.484 "dma_device_id": "system", 00:10:27.484 "dma_device_type": 1 00:10:27.484 }, 00:10:27.484 { 00:10:27.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.484 "dma_device_type": 2 00:10:27.484 }, 00:10:27.484 { 00:10:27.484 "dma_device_id": "system", 00:10:27.484 "dma_device_type": 1 00:10:27.484 }, 00:10:27.484 { 00:10:27.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.484 "dma_device_type": 2 00:10:27.484 }, 00:10:27.484 { 00:10:27.484 "dma_device_id": "system", 00:10:27.484 "dma_device_type": 1 00:10:27.484 }, 00:10:27.484 { 00:10:27.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.484 "dma_device_type": 2 00:10:27.484 } 00:10:27.485 ], 00:10:27.485 "driver_specific": { 00:10:27.485 "raid": { 00:10:27.485 "uuid": "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd", 00:10:27.485 "strip_size_kb": 64, 00:10:27.485 "state": "online", 00:10:27.485 "raid_level": "raid0", 00:10:27.485 "superblock": true, 00:10:27.485 "num_base_bdevs": 4, 00:10:27.485 "num_base_bdevs_discovered": 4, 00:10:27.485 "num_base_bdevs_operational": 4, 00:10:27.485 "base_bdevs_list": [ 00:10:27.485 { 00:10:27.485 "name": "BaseBdev1", 00:10:27.485 "uuid": "bbf48ffe-be3d-4541-a8f7-aa1bbf72614b", 00:10:27.485 "is_configured": true, 00:10:27.485 "data_offset": 2048, 00:10:27.485 "data_size": 63488 00:10:27.485 }, 00:10:27.485 { 00:10:27.485 "name": "BaseBdev2", 00:10:27.485 "uuid": "20b2e22b-c79c-448d-bcfc-b0e2fc3dc58a", 00:10:27.485 "is_configured": true, 00:10:27.485 "data_offset": 2048, 00:10:27.485 "data_size": 63488 00:10:27.485 }, 00:10:27.485 { 00:10:27.485 "name": "BaseBdev3", 00:10:27.485 "uuid": "69d58ce3-c108-45cf-805c-317f27cb7723", 00:10:27.485 "is_configured": true, 
00:10:27.485 "data_offset": 2048, 00:10:27.485 "data_size": 63488 00:10:27.485 }, 00:10:27.485 { 00:10:27.485 "name": "BaseBdev4", 00:10:27.485 "uuid": "14198864-9816-4da4-b829-ebc9dd0502c1", 00:10:27.485 "is_configured": true, 00:10:27.485 "data_offset": 2048, 00:10:27.485 "data_size": 63488 00:10:27.485 } 00:10:27.485 ] 00:10:27.485 } 00:10:27.485 } 00:10:27.485 }' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:27.485 BaseBdev2 00:10:27.485 BaseBdev3 00:10:27.485 BaseBdev4' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.485 14:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.485 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.744 [2024-12-09 14:43:05.687151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.744 [2024-12-09 14:43:05.687215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.744 [2024-12-09 14:43:05.687289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.744 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:28.004 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.004 "name": "Existed_Raid", 00:10:28.004 "uuid": "d399d8fc-8aa3-4dca-a563-10f49fdcc0cd", 00:10:28.004 "strip_size_kb": 64, 00:10:28.004 "state": "offline", 00:10:28.004 "raid_level": "raid0", 00:10:28.004 "superblock": true, 00:10:28.004 "num_base_bdevs": 4, 00:10:28.004 "num_base_bdevs_discovered": 3, 00:10:28.004 "num_base_bdevs_operational": 3, 00:10:28.004 "base_bdevs_list": [ 00:10:28.004 { 00:10:28.004 "name": null, 00:10:28.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.004 "is_configured": false, 00:10:28.004 "data_offset": 0, 00:10:28.004 "data_size": 63488 00:10:28.004 }, 00:10:28.004 { 00:10:28.004 "name": "BaseBdev2", 00:10:28.004 "uuid": "20b2e22b-c79c-448d-bcfc-b0e2fc3dc58a", 00:10:28.004 "is_configured": true, 00:10:28.004 "data_offset": 2048, 00:10:28.004 "data_size": 63488 00:10:28.004 }, 00:10:28.004 { 00:10:28.004 "name": "BaseBdev3", 00:10:28.004 "uuid": "69d58ce3-c108-45cf-805c-317f27cb7723", 00:10:28.004 "is_configured": true, 00:10:28.004 "data_offset": 2048, 00:10:28.004 "data_size": 63488 00:10:28.004 }, 00:10:28.004 { 00:10:28.004 "name": "BaseBdev4", 00:10:28.004 "uuid": "14198864-9816-4da4-b829-ebc9dd0502c1", 00:10:28.004 "is_configured": true, 00:10:28.004 "data_offset": 2048, 00:10:28.004 "data_size": 63488 00:10:28.004 } 00:10:28.004 ] 00:10:28.004 }' 00:10:28.004 14:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.004 14:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.261 
14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.261 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.261 [2024-12-09 14:43:06.302809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.520 [2024-12-09 14:43:06.466228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:28.520 14:43:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.520 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.520 [2024-12-09 14:43:06.627210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:28.520 [2024-12-09 14:43:06.627394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.780 BaseBdev2 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.780 [ 00:10:28.780 { 00:10:28.780 "name": "BaseBdev2", 00:10:28.780 "aliases": [ 00:10:28.780 
"edbbb337-3c1e-45f0-af66-b53f91e84b28" 00:10:28.780 ], 00:10:28.780 "product_name": "Malloc disk", 00:10:28.780 "block_size": 512, 00:10:28.780 "num_blocks": 65536, 00:10:28.780 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:28.780 "assigned_rate_limits": { 00:10:28.780 "rw_ios_per_sec": 0, 00:10:28.780 "rw_mbytes_per_sec": 0, 00:10:28.780 "r_mbytes_per_sec": 0, 00:10:28.780 "w_mbytes_per_sec": 0 00:10:28.780 }, 00:10:28.780 "claimed": false, 00:10:28.780 "zoned": false, 00:10:28.780 "supported_io_types": { 00:10:28.780 "read": true, 00:10:28.780 "write": true, 00:10:28.780 "unmap": true, 00:10:28.780 "flush": true, 00:10:28.780 "reset": true, 00:10:28.780 "nvme_admin": false, 00:10:28.780 "nvme_io": false, 00:10:28.780 "nvme_io_md": false, 00:10:28.780 "write_zeroes": true, 00:10:28.780 "zcopy": true, 00:10:28.780 "get_zone_info": false, 00:10:28.780 "zone_management": false, 00:10:28.780 "zone_append": false, 00:10:28.780 "compare": false, 00:10:28.780 "compare_and_write": false, 00:10:28.780 "abort": true, 00:10:28.780 "seek_hole": false, 00:10:28.780 "seek_data": false, 00:10:28.780 "copy": true, 00:10:28.780 "nvme_iov_md": false 00:10:28.780 }, 00:10:28.780 "memory_domains": [ 00:10:28.780 { 00:10:28.780 "dma_device_id": "system", 00:10:28.780 "dma_device_type": 1 00:10:28.780 }, 00:10:28.780 { 00:10:28.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.780 "dma_device_type": 2 00:10:28.780 } 00:10:28.780 ], 00:10:28.780 "driver_specific": {} 00:10:28.780 } 00:10:28.780 ] 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.780 14:43:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.780 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 BaseBdev3 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 [ 00:10:29.040 { 
00:10:29.040 "name": "BaseBdev3", 00:10:29.040 "aliases": [ 00:10:29.040 "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb" 00:10:29.040 ], 00:10:29.040 "product_name": "Malloc disk", 00:10:29.040 "block_size": 512, 00:10:29.040 "num_blocks": 65536, 00:10:29.040 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:29.040 "assigned_rate_limits": { 00:10:29.040 "rw_ios_per_sec": 0, 00:10:29.040 "rw_mbytes_per_sec": 0, 00:10:29.040 "r_mbytes_per_sec": 0, 00:10:29.040 "w_mbytes_per_sec": 0 00:10:29.040 }, 00:10:29.040 "claimed": false, 00:10:29.040 "zoned": false, 00:10:29.040 "supported_io_types": { 00:10:29.040 "read": true, 00:10:29.040 "write": true, 00:10:29.040 "unmap": true, 00:10:29.040 "flush": true, 00:10:29.040 "reset": true, 00:10:29.040 "nvme_admin": false, 00:10:29.040 "nvme_io": false, 00:10:29.040 "nvme_io_md": false, 00:10:29.040 "write_zeroes": true, 00:10:29.040 "zcopy": true, 00:10:29.040 "get_zone_info": false, 00:10:29.040 "zone_management": false, 00:10:29.040 "zone_append": false, 00:10:29.040 "compare": false, 00:10:29.040 "compare_and_write": false, 00:10:29.040 "abort": true, 00:10:29.040 "seek_hole": false, 00:10:29.040 "seek_data": false, 00:10:29.040 "copy": true, 00:10:29.040 "nvme_iov_md": false 00:10:29.040 }, 00:10:29.040 "memory_domains": [ 00:10:29.040 { 00:10:29.040 "dma_device_id": "system", 00:10:29.040 "dma_device_type": 1 00:10:29.040 }, 00:10:29.040 { 00:10:29.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.040 "dma_device_type": 2 00:10:29.040 } 00:10:29.040 ], 00:10:29.040 "driver_specific": {} 00:10:29.040 } 00:10:29.040 ] 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.040 14:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 BaseBdev4 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.040 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:29.040 [ 00:10:29.040 { 00:10:29.040 "name": "BaseBdev4", 00:10:29.040 "aliases": [ 00:10:29.040 "d0739ac2-7404-414a-b91a-02e6feee43e0" 00:10:29.040 ], 00:10:29.040 "product_name": "Malloc disk", 00:10:29.040 "block_size": 512, 00:10:29.040 "num_blocks": 65536, 00:10:29.040 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:29.040 "assigned_rate_limits": { 00:10:29.040 "rw_ios_per_sec": 0, 00:10:29.040 "rw_mbytes_per_sec": 0, 00:10:29.040 "r_mbytes_per_sec": 0, 00:10:29.040 "w_mbytes_per_sec": 0 00:10:29.040 }, 00:10:29.040 "claimed": false, 00:10:29.040 "zoned": false, 00:10:29.040 "supported_io_types": { 00:10:29.040 "read": true, 00:10:29.040 "write": true, 00:10:29.040 "unmap": true, 00:10:29.040 "flush": true, 00:10:29.040 "reset": true, 00:10:29.040 "nvme_admin": false, 00:10:29.040 "nvme_io": false, 00:10:29.040 "nvme_io_md": false, 00:10:29.040 "write_zeroes": true, 00:10:29.040 "zcopy": true, 00:10:29.040 "get_zone_info": false, 00:10:29.040 "zone_management": false, 00:10:29.040 "zone_append": false, 00:10:29.040 "compare": false, 00:10:29.040 "compare_and_write": false, 00:10:29.040 "abort": true, 00:10:29.040 "seek_hole": false, 00:10:29.040 "seek_data": false, 00:10:29.040 "copy": true, 00:10:29.040 "nvme_iov_md": false 00:10:29.040 }, 00:10:29.040 "memory_domains": [ 00:10:29.040 { 00:10:29.041 "dma_device_id": "system", 00:10:29.041 "dma_device_type": 1 00:10:29.041 }, 00:10:29.041 { 00:10:29.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.041 "dma_device_type": 2 00:10:29.041 } 00:10:29.041 ], 00:10:29.041 "driver_specific": {} 00:10:29.041 } 00:10:29.041 ] 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.041 14:43:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.041 [2024-12-09 14:43:07.072149] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.041 [2024-12-09 14:43:07.072352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.041 [2024-12-09 14:43:07.072387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.041 [2024-12-09 14:43:07.074727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.041 [2024-12-09 14:43:07.074808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.041 "name": "Existed_Raid", 00:10:29.041 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:29.041 "strip_size_kb": 64, 00:10:29.041 "state": "configuring", 00:10:29.041 "raid_level": "raid0", 00:10:29.041 "superblock": true, 00:10:29.041 "num_base_bdevs": 4, 00:10:29.041 "num_base_bdevs_discovered": 3, 00:10:29.041 "num_base_bdevs_operational": 4, 00:10:29.041 "base_bdevs_list": [ 00:10:29.041 { 00:10:29.041 "name": "BaseBdev1", 00:10:29.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.041 "is_configured": false, 00:10:29.041 "data_offset": 0, 00:10:29.041 "data_size": 0 00:10:29.041 }, 00:10:29.041 { 00:10:29.041 "name": "BaseBdev2", 00:10:29.041 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:29.041 "is_configured": true, 00:10:29.041 "data_offset": 2048, 00:10:29.041 "data_size": 63488 
00:10:29.041 }, 00:10:29.041 { 00:10:29.041 "name": "BaseBdev3", 00:10:29.041 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:29.041 "is_configured": true, 00:10:29.041 "data_offset": 2048, 00:10:29.041 "data_size": 63488 00:10:29.041 }, 00:10:29.041 { 00:10:29.041 "name": "BaseBdev4", 00:10:29.041 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:29.041 "is_configured": true, 00:10:29.041 "data_offset": 2048, 00:10:29.041 "data_size": 63488 00:10:29.041 } 00:10:29.041 ] 00:10:29.041 }' 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.041 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.609 [2024-12-09 14:43:07.527701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.609 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.609 "name": "Existed_Raid", 00:10:29.609 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:29.609 "strip_size_kb": 64, 00:10:29.609 "state": "configuring", 00:10:29.609 "raid_level": "raid0", 00:10:29.609 "superblock": true, 00:10:29.609 "num_base_bdevs": 4, 00:10:29.609 "num_base_bdevs_discovered": 2, 00:10:29.609 "num_base_bdevs_operational": 4, 00:10:29.609 "base_bdevs_list": [ 00:10:29.609 { 00:10:29.609 "name": "BaseBdev1", 00:10:29.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.609 "is_configured": false, 00:10:29.609 "data_offset": 0, 00:10:29.609 "data_size": 0 00:10:29.609 }, 00:10:29.609 { 00:10:29.609 "name": null, 00:10:29.609 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:29.609 "is_configured": false, 00:10:29.609 "data_offset": 0, 00:10:29.609 "data_size": 63488 
00:10:29.609 }, 00:10:29.609 { 00:10:29.609 "name": "BaseBdev3", 00:10:29.609 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:29.609 "is_configured": true, 00:10:29.609 "data_offset": 2048, 00:10:29.609 "data_size": 63488 00:10:29.609 }, 00:10:29.609 { 00:10:29.609 "name": "BaseBdev4", 00:10:29.610 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:29.610 "is_configured": true, 00:10:29.610 "data_offset": 2048, 00:10:29.610 "data_size": 63488 00:10:29.610 } 00:10:29.610 ] 00:10:29.610 }' 00:10:29.610 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.610 14:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.178 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.178 14:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.178 [2024-12-09 14:43:08.092207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.178 BaseBdev1 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.178 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.178 [ 00:10:30.178 { 00:10:30.178 "name": "BaseBdev1", 00:10:30.178 "aliases": [ 00:10:30.178 "4a1298c5-6574-473d-9de9-9bb3116cb285" 00:10:30.178 ], 00:10:30.178 "product_name": "Malloc disk", 00:10:30.178 "block_size": 512, 00:10:30.178 "num_blocks": 65536, 00:10:30.178 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:30.178 "assigned_rate_limits": { 00:10:30.178 "rw_ios_per_sec": 0, 00:10:30.178 "rw_mbytes_per_sec": 0, 
00:10:30.178 "r_mbytes_per_sec": 0, 00:10:30.178 "w_mbytes_per_sec": 0 00:10:30.178 }, 00:10:30.178 "claimed": true, 00:10:30.178 "claim_type": "exclusive_write", 00:10:30.178 "zoned": false, 00:10:30.178 "supported_io_types": { 00:10:30.179 "read": true, 00:10:30.179 "write": true, 00:10:30.179 "unmap": true, 00:10:30.179 "flush": true, 00:10:30.179 "reset": true, 00:10:30.179 "nvme_admin": false, 00:10:30.179 "nvme_io": false, 00:10:30.179 "nvme_io_md": false, 00:10:30.179 "write_zeroes": true, 00:10:30.179 "zcopy": true, 00:10:30.179 "get_zone_info": false, 00:10:30.179 "zone_management": false, 00:10:30.179 "zone_append": false, 00:10:30.179 "compare": false, 00:10:30.179 "compare_and_write": false, 00:10:30.179 "abort": true, 00:10:30.179 "seek_hole": false, 00:10:30.179 "seek_data": false, 00:10:30.179 "copy": true, 00:10:30.179 "nvme_iov_md": false 00:10:30.179 }, 00:10:30.179 "memory_domains": [ 00:10:30.179 { 00:10:30.179 "dma_device_id": "system", 00:10:30.179 "dma_device_type": 1 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.179 "dma_device_type": 2 00:10:30.179 } 00:10:30.179 ], 00:10:30.179 "driver_specific": {} 00:10:30.179 } 00:10:30.179 ] 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.179 14:43:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.179 "name": "Existed_Raid", 00:10:30.179 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:30.179 "strip_size_kb": 64, 00:10:30.179 "state": "configuring", 00:10:30.179 "raid_level": "raid0", 00:10:30.179 "superblock": true, 00:10:30.179 "num_base_bdevs": 4, 00:10:30.179 "num_base_bdevs_discovered": 3, 00:10:30.179 "num_base_bdevs_operational": 4, 00:10:30.179 "base_bdevs_list": [ 00:10:30.179 { 00:10:30.179 "name": "BaseBdev1", 00:10:30.179 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:30.179 "is_configured": true, 00:10:30.179 "data_offset": 2048, 00:10:30.179 "data_size": 63488 00:10:30.179 }, 00:10:30.179 { 
00:10:30.179 "name": null, 00:10:30.179 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:30.179 "is_configured": false, 00:10:30.179 "data_offset": 0, 00:10:30.179 "data_size": 63488 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "name": "BaseBdev3", 00:10:30.179 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:30.179 "is_configured": true, 00:10:30.179 "data_offset": 2048, 00:10:30.179 "data_size": 63488 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "name": "BaseBdev4", 00:10:30.179 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:30.179 "is_configured": true, 00:10:30.179 "data_offset": 2048, 00:10:30.179 "data_size": 63488 00:10:30.179 } 00:10:30.179 ] 00:10:30.179 }' 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.179 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.749 [2024-12-09 14:43:08.631467] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.749 14:43:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.749 "name": "Existed_Raid", 00:10:30.749 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:30.749 "strip_size_kb": 64, 00:10:30.749 "state": "configuring", 00:10:30.749 "raid_level": "raid0", 00:10:30.749 "superblock": true, 00:10:30.749 "num_base_bdevs": 4, 00:10:30.749 "num_base_bdevs_discovered": 2, 00:10:30.749 "num_base_bdevs_operational": 4, 00:10:30.749 "base_bdevs_list": [ 00:10:30.749 { 00:10:30.749 "name": "BaseBdev1", 00:10:30.749 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:30.749 "is_configured": true, 00:10:30.749 "data_offset": 2048, 00:10:30.749 "data_size": 63488 00:10:30.749 }, 00:10:30.749 { 00:10:30.749 "name": null, 00:10:30.749 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:30.749 "is_configured": false, 00:10:30.749 "data_offset": 0, 00:10:30.749 "data_size": 63488 00:10:30.749 }, 00:10:30.749 { 00:10:30.749 "name": null, 00:10:30.749 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:30.749 "is_configured": false, 00:10:30.749 "data_offset": 0, 00:10:30.749 "data_size": 63488 00:10:30.749 }, 00:10:30.749 { 00:10:30.749 "name": "BaseBdev4", 00:10:30.749 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:30.749 "is_configured": true, 00:10:30.749 "data_offset": 2048, 00:10:30.749 "data_size": 63488 00:10:30.749 } 00:10:30.749 ] 00:10:30.749 }' 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.749 14:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.009 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.009 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.009 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.009 
14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.293 [2024-12-09 14:43:09.178613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.293 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.293 "name": "Existed_Raid", 00:10:31.293 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:31.293 "strip_size_kb": 64, 00:10:31.293 "state": "configuring", 00:10:31.293 "raid_level": "raid0", 00:10:31.293 "superblock": true, 00:10:31.293 "num_base_bdevs": 4, 00:10:31.293 "num_base_bdevs_discovered": 3, 00:10:31.293 "num_base_bdevs_operational": 4, 00:10:31.293 "base_bdevs_list": [ 00:10:31.293 { 00:10:31.293 "name": "BaseBdev1", 00:10:31.293 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:31.293 "is_configured": true, 00:10:31.293 "data_offset": 2048, 00:10:31.293 "data_size": 63488 00:10:31.293 }, 00:10:31.293 { 00:10:31.293 "name": null, 00:10:31.293 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:31.293 "is_configured": false, 00:10:31.293 "data_offset": 0, 00:10:31.293 "data_size": 63488 00:10:31.294 }, 00:10:31.294 { 00:10:31.294 "name": "BaseBdev3", 00:10:31.294 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:31.294 "is_configured": true, 00:10:31.294 "data_offset": 2048, 00:10:31.294 "data_size": 63488 00:10:31.294 }, 00:10:31.294 { 00:10:31.294 "name": "BaseBdev4", 00:10:31.294 "uuid": 
"d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:31.294 "is_configured": true, 00:10:31.294 "data_offset": 2048, 00:10:31.294 "data_size": 63488 00:10:31.294 } 00:10:31.294 ] 00:10:31.294 }' 00:10:31.294 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.294 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.559 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.559 [2024-12-09 14:43:09.593936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.819 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.819 "name": "Existed_Raid", 00:10:31.819 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:31.819 "strip_size_kb": 64, 00:10:31.819 "state": "configuring", 00:10:31.819 "raid_level": "raid0", 00:10:31.819 "superblock": true, 00:10:31.819 "num_base_bdevs": 4, 00:10:31.819 "num_base_bdevs_discovered": 2, 00:10:31.819 "num_base_bdevs_operational": 4, 00:10:31.819 "base_bdevs_list": [ 00:10:31.819 { 00:10:31.819 "name": null, 00:10:31.819 
"uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:31.819 "is_configured": false, 00:10:31.819 "data_offset": 0, 00:10:31.819 "data_size": 63488 00:10:31.819 }, 00:10:31.819 { 00:10:31.819 "name": null, 00:10:31.819 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:31.819 "is_configured": false, 00:10:31.819 "data_offset": 0, 00:10:31.819 "data_size": 63488 00:10:31.819 }, 00:10:31.819 { 00:10:31.819 "name": "BaseBdev3", 00:10:31.819 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:31.819 "is_configured": true, 00:10:31.819 "data_offset": 2048, 00:10:31.819 "data_size": 63488 00:10:31.819 }, 00:10:31.819 { 00:10:31.819 "name": "BaseBdev4", 00:10:31.819 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:31.819 "is_configured": true, 00:10:31.819 "data_offset": 2048, 00:10:31.820 "data_size": 63488 00:10:31.820 } 00:10:31.820 ] 00:10:31.820 }' 00:10:31.820 14:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.820 14:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.079 [2024-12-09 14:43:10.180694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.079 14:43:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.338 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.338 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.338 "name": "Existed_Raid", 00:10:32.338 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:32.338 "strip_size_kb": 64, 00:10:32.338 "state": "configuring", 00:10:32.338 "raid_level": "raid0", 00:10:32.338 "superblock": true, 00:10:32.338 "num_base_bdevs": 4, 00:10:32.338 "num_base_bdevs_discovered": 3, 00:10:32.338 "num_base_bdevs_operational": 4, 00:10:32.338 "base_bdevs_list": [ 00:10:32.338 { 00:10:32.338 "name": null, 00:10:32.338 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:32.338 "is_configured": false, 00:10:32.338 "data_offset": 0, 00:10:32.338 "data_size": 63488 00:10:32.338 }, 00:10:32.338 { 00:10:32.338 "name": "BaseBdev2", 00:10:32.338 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:32.338 "is_configured": true, 00:10:32.338 "data_offset": 2048, 00:10:32.338 "data_size": 63488 00:10:32.338 }, 00:10:32.338 { 00:10:32.338 "name": "BaseBdev3", 00:10:32.338 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:32.338 "is_configured": true, 00:10:32.338 "data_offset": 2048, 00:10:32.338 "data_size": 63488 00:10:32.338 }, 00:10:32.338 { 00:10:32.338 "name": "BaseBdev4", 00:10:32.338 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:32.338 "is_configured": true, 00:10:32.338 "data_offset": 2048, 00:10:32.338 "data_size": 63488 00:10:32.338 } 00:10:32.338 ] 00:10:32.338 }' 00:10:32.338 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.338 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.598 14:43:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a1298c5-6574-473d-9de9-9bb3116cb285 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 [2024-12-09 14:43:10.665410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:32.598 [2024-12-09 14:43:10.665893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.598 [2024-12-09 14:43:10.665954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:32.598 [2024-12-09 14:43:10.666312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:32.598 NewBaseBdev 00:10:32.598 [2024-12-09 14:43:10.666529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.598 [2024-12-09 14:43:10.666546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:32.598 [2024-12-09 14:43:10.666716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:32.598 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.598 14:43:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 [ 00:10:32.598 { 00:10:32.598 "name": "NewBaseBdev", 00:10:32.598 "aliases": [ 00:10:32.598 "4a1298c5-6574-473d-9de9-9bb3116cb285" 00:10:32.598 ], 00:10:32.598 "product_name": "Malloc disk", 00:10:32.598 "block_size": 512, 00:10:32.598 "num_blocks": 65536, 00:10:32.598 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:32.598 "assigned_rate_limits": { 00:10:32.598 "rw_ios_per_sec": 0, 00:10:32.598 "rw_mbytes_per_sec": 0, 00:10:32.598 "r_mbytes_per_sec": 0, 00:10:32.598 "w_mbytes_per_sec": 0 00:10:32.598 }, 00:10:32.598 "claimed": true, 00:10:32.598 "claim_type": "exclusive_write", 00:10:32.598 "zoned": false, 00:10:32.598 "supported_io_types": { 00:10:32.598 "read": true, 00:10:32.598 "write": true, 00:10:32.598 "unmap": true, 00:10:32.598 "flush": true, 00:10:32.598 "reset": true, 00:10:32.598 "nvme_admin": false, 00:10:32.598 "nvme_io": false, 00:10:32.598 "nvme_io_md": false, 00:10:32.598 "write_zeroes": true, 00:10:32.598 "zcopy": true, 00:10:32.598 "get_zone_info": false, 00:10:32.598 "zone_management": false, 00:10:32.598 "zone_append": false, 00:10:32.598 "compare": false, 00:10:32.598 "compare_and_write": false, 00:10:32.598 "abort": true, 00:10:32.598 "seek_hole": false, 00:10:32.598 "seek_data": false, 00:10:32.598 "copy": true, 00:10:32.598 "nvme_iov_md": false 00:10:32.598 }, 00:10:32.598 "memory_domains": [ 00:10:32.599 { 00:10:32.599 "dma_device_id": "system", 00:10:32.599 "dma_device_type": 1 00:10:32.599 }, 00:10:32.599 { 00:10:32.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.599 "dma_device_type": 2 00:10:32.599 } 00:10:32.599 ], 00:10:32.599 "driver_specific": {} 00:10:32.599 } 00:10:32.599 ] 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:32.599 14:43:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.599 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.859 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.859 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.859 "name": "Existed_Raid", 00:10:32.859 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:32.859 "strip_size_kb": 64, 00:10:32.859 
"state": "online", 00:10:32.859 "raid_level": "raid0", 00:10:32.859 "superblock": true, 00:10:32.859 "num_base_bdevs": 4, 00:10:32.859 "num_base_bdevs_discovered": 4, 00:10:32.859 "num_base_bdevs_operational": 4, 00:10:32.859 "base_bdevs_list": [ 00:10:32.859 { 00:10:32.859 "name": "NewBaseBdev", 00:10:32.859 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:32.859 "is_configured": true, 00:10:32.859 "data_offset": 2048, 00:10:32.859 "data_size": 63488 00:10:32.859 }, 00:10:32.859 { 00:10:32.859 "name": "BaseBdev2", 00:10:32.859 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:32.859 "is_configured": true, 00:10:32.859 "data_offset": 2048, 00:10:32.859 "data_size": 63488 00:10:32.859 }, 00:10:32.859 { 00:10:32.859 "name": "BaseBdev3", 00:10:32.859 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:32.859 "is_configured": true, 00:10:32.859 "data_offset": 2048, 00:10:32.859 "data_size": 63488 00:10:32.859 }, 00:10:32.859 { 00:10:32.859 "name": "BaseBdev4", 00:10:32.859 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:32.859 "is_configured": true, 00:10:32.859 "data_offset": 2048, 00:10:32.859 "data_size": 63488 00:10:32.859 } 00:10:32.859 ] 00:10:32.859 }' 00:10:32.859 14:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.859 14:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.119 
14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.119 [2024-12-09 14:43:11.173104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.119 "name": "Existed_Raid", 00:10:33.119 "aliases": [ 00:10:33.119 "8937c840-0461-495e-a8cb-37ec5d398844" 00:10:33.119 ], 00:10:33.119 "product_name": "Raid Volume", 00:10:33.119 "block_size": 512, 00:10:33.119 "num_blocks": 253952, 00:10:33.119 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:33.119 "assigned_rate_limits": { 00:10:33.119 "rw_ios_per_sec": 0, 00:10:33.119 "rw_mbytes_per_sec": 0, 00:10:33.119 "r_mbytes_per_sec": 0, 00:10:33.119 "w_mbytes_per_sec": 0 00:10:33.119 }, 00:10:33.119 "claimed": false, 00:10:33.119 "zoned": false, 00:10:33.119 "supported_io_types": { 00:10:33.119 "read": true, 00:10:33.119 "write": true, 00:10:33.119 "unmap": true, 00:10:33.119 "flush": true, 00:10:33.119 "reset": true, 00:10:33.119 "nvme_admin": false, 00:10:33.119 "nvme_io": false, 00:10:33.119 "nvme_io_md": false, 00:10:33.119 "write_zeroes": true, 00:10:33.119 "zcopy": false, 00:10:33.119 "get_zone_info": false, 00:10:33.119 "zone_management": false, 00:10:33.119 "zone_append": false, 00:10:33.119 "compare": false, 00:10:33.119 "compare_and_write": false, 00:10:33.119 "abort": 
false, 00:10:33.119 "seek_hole": false, 00:10:33.119 "seek_data": false, 00:10:33.119 "copy": false, 00:10:33.119 "nvme_iov_md": false 00:10:33.119 }, 00:10:33.119 "memory_domains": [ 00:10:33.119 { 00:10:33.119 "dma_device_id": "system", 00:10:33.119 "dma_device_type": 1 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.119 "dma_device_type": 2 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "dma_device_id": "system", 00:10:33.119 "dma_device_type": 1 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.119 "dma_device_type": 2 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "dma_device_id": "system", 00:10:33.119 "dma_device_type": 1 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.119 "dma_device_type": 2 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "dma_device_id": "system", 00:10:33.119 "dma_device_type": 1 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.119 "dma_device_type": 2 00:10:33.119 } 00:10:33.119 ], 00:10:33.119 "driver_specific": { 00:10:33.119 "raid": { 00:10:33.119 "uuid": "8937c840-0461-495e-a8cb-37ec5d398844", 00:10:33.119 "strip_size_kb": 64, 00:10:33.119 "state": "online", 00:10:33.119 "raid_level": "raid0", 00:10:33.119 "superblock": true, 00:10:33.119 "num_base_bdevs": 4, 00:10:33.119 "num_base_bdevs_discovered": 4, 00:10:33.119 "num_base_bdevs_operational": 4, 00:10:33.119 "base_bdevs_list": [ 00:10:33.119 { 00:10:33.119 "name": "NewBaseBdev", 00:10:33.119 "uuid": "4a1298c5-6574-473d-9de9-9bb3116cb285", 00:10:33.119 "is_configured": true, 00:10:33.119 "data_offset": 2048, 00:10:33.119 "data_size": 63488 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "name": "BaseBdev2", 00:10:33.119 "uuid": "edbbb337-3c1e-45f0-af66-b53f91e84b28", 00:10:33.119 "is_configured": true, 00:10:33.119 "data_offset": 2048, 00:10:33.119 "data_size": 63488 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 
"name": "BaseBdev3", 00:10:33.119 "uuid": "7da4cc3c-5d2a-4981-8c6c-9238e0f31fbb", 00:10:33.119 "is_configured": true, 00:10:33.119 "data_offset": 2048, 00:10:33.119 "data_size": 63488 00:10:33.119 }, 00:10:33.119 { 00:10:33.119 "name": "BaseBdev4", 00:10:33.119 "uuid": "d0739ac2-7404-414a-b91a-02e6feee43e0", 00:10:33.119 "is_configured": true, 00:10:33.119 "data_offset": 2048, 00:10:33.119 "data_size": 63488 00:10:33.119 } 00:10:33.119 ] 00:10:33.119 } 00:10:33.119 } 00:10:33.119 }' 00:10:33.119 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.379 BaseBdev2 00:10:33.379 BaseBdev3 00:10:33.379 BaseBdev4' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.379 14:43:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.379 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.638 [2024-12-09 14:43:11.516064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.638 [2024-12-09 14:43:11.516227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.638 [2024-12-09 14:43:11.516360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.638 [2024-12-09 14:43:11.516446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.638 [2024-12-09 14:43:11.516459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71329 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71329 ']' 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71329 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71329 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71329' 00:10:33.638 killing process with pid 71329 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71329 00:10:33.638 [2024-12-09 14:43:11.560199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.638 14:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71329 00:10:34.208 [2024-12-09 14:43:12.025402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.588 14:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.588 00:10:35.588 real 0m11.968s 00:10:35.588 user 0m18.537s 00:10:35.588 sys 0m2.234s 00:10:35.588 14:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.588 14:43:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.588 ************************************ 00:10:35.588 END TEST raid_state_function_test_sb 00:10:35.588 ************************************ 00:10:35.588 14:43:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:35.588 14:43:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:35.588 14:43:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.588 14:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.588 ************************************ 00:10:35.588 START TEST raid_superblock_test 00:10:35.588 ************************************ 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72005 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72005 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72005 ']' 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.588 14:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.588 [2024-12-09 14:43:13.508983] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:10:35.588 [2024-12-09 14:43:13.509195] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72005 ] 00:10:35.588 [2024-12-09 14:43:13.686354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.848 [2024-12-09 14:43:13.836869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.107 [2024-12-09 14:43:14.104535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.107 [2024-12-09 14:43:14.104773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:36.367 
14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 malloc1 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 [2024-12-09 14:43:14.423015] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:36.367 [2024-12-09 14:43:14.423217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.367 [2024-12-09 14:43:14.423259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:36.367 [2024-12-09 14:43:14.423274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.367 [2024-12-09 14:43:14.426176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.367 [2024-12-09 14:43:14.426312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:36.367 pt1 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 malloc2 00:10:36.367 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 [2024-12-09 14:43:14.494019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:36.627 [2024-12-09 14:43:14.494214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.627 [2024-12-09 14:43:14.494276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:36.627 [2024-12-09 14:43:14.494319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.627 [2024-12-09 14:43:14.497304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.627 [2024-12-09 14:43:14.497446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:36.627 
pt2 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 malloc3 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 [2024-12-09 14:43:14.576142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.627 [2024-12-09 14:43:14.576336] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.627 [2024-12-09 14:43:14.576380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:36.627 [2024-12-09 14:43:14.576394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.627 [2024-12-09 14:43:14.579441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.627 [2024-12-09 14:43:14.579604] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.627 pt3 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 malloc4 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 [2024-12-09 14:43:14.644994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:36.627 [2024-12-09 14:43:14.645178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.627 [2024-12-09 14:43:14.645233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:36.627 [2024-12-09 14:43:14.645276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.627 [2024-12-09 14:43:14.648028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.627 [2024-12-09 14:43:14.648131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:36.627 pt4 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 [2024-12-09 14:43:14.657035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:36.627 [2024-12-09 
14:43:14.659322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:36.627 [2024-12-09 14:43:14.659487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.627 [2024-12-09 14:43:14.659587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:36.627 [2024-12-09 14:43:14.659836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:36.627 [2024-12-09 14:43:14.659892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.627 [2024-12-09 14:43:14.660219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:36.627 [2024-12-09 14:43:14.660467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:36.627 [2024-12-09 14:43:14.660524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:36.627 [2024-12-09 14:43:14.660777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.627 "name": "raid_bdev1", 00:10:36.627 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:36.627 "strip_size_kb": 64, 00:10:36.627 "state": "online", 00:10:36.627 "raid_level": "raid0", 00:10:36.627 "superblock": true, 00:10:36.627 "num_base_bdevs": 4, 00:10:36.627 "num_base_bdevs_discovered": 4, 00:10:36.627 "num_base_bdevs_operational": 4, 00:10:36.627 "base_bdevs_list": [ 00:10:36.627 { 00:10:36.627 "name": "pt1", 00:10:36.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.627 "is_configured": true, 00:10:36.627 "data_offset": 2048, 00:10:36.628 "data_size": 63488 00:10:36.628 }, 00:10:36.628 { 00:10:36.628 "name": "pt2", 00:10:36.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.628 "is_configured": true, 00:10:36.628 "data_offset": 2048, 00:10:36.628 "data_size": 63488 00:10:36.628 }, 00:10:36.628 { 00:10:36.628 "name": "pt3", 00:10:36.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.628 "is_configured": true, 00:10:36.628 "data_offset": 2048, 00:10:36.628 
"data_size": 63488 00:10:36.628 }, 00:10:36.628 { 00:10:36.628 "name": "pt4", 00:10:36.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.628 "is_configured": true, 00:10:36.628 "data_offset": 2048, 00:10:36.628 "data_size": 63488 00:10:36.628 } 00:10:36.628 ] 00:10:36.628 }' 00:10:36.628 14:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.628 14:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.196 [2024-12-09 14:43:15.128653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.196 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.196 "name": "raid_bdev1", 00:10:37.196 "aliases": [ 00:10:37.196 "0db5650d-edc5-4c5e-b38c-889696c9916e" 
00:10:37.196 ], 00:10:37.196 "product_name": "Raid Volume", 00:10:37.196 "block_size": 512, 00:10:37.196 "num_blocks": 253952, 00:10:37.196 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:37.196 "assigned_rate_limits": { 00:10:37.196 "rw_ios_per_sec": 0, 00:10:37.196 "rw_mbytes_per_sec": 0, 00:10:37.196 "r_mbytes_per_sec": 0, 00:10:37.196 "w_mbytes_per_sec": 0 00:10:37.196 }, 00:10:37.196 "claimed": false, 00:10:37.196 "zoned": false, 00:10:37.196 "supported_io_types": { 00:10:37.196 "read": true, 00:10:37.196 "write": true, 00:10:37.196 "unmap": true, 00:10:37.196 "flush": true, 00:10:37.196 "reset": true, 00:10:37.196 "nvme_admin": false, 00:10:37.196 "nvme_io": false, 00:10:37.196 "nvme_io_md": false, 00:10:37.196 "write_zeroes": true, 00:10:37.196 "zcopy": false, 00:10:37.196 "get_zone_info": false, 00:10:37.196 "zone_management": false, 00:10:37.196 "zone_append": false, 00:10:37.196 "compare": false, 00:10:37.196 "compare_and_write": false, 00:10:37.196 "abort": false, 00:10:37.196 "seek_hole": false, 00:10:37.196 "seek_data": false, 00:10:37.196 "copy": false, 00:10:37.196 "nvme_iov_md": false 00:10:37.196 }, 00:10:37.196 "memory_domains": [ 00:10:37.196 { 00:10:37.196 "dma_device_id": "system", 00:10:37.196 "dma_device_type": 1 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.196 "dma_device_type": 2 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "dma_device_id": "system", 00:10:37.196 "dma_device_type": 1 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.196 "dma_device_type": 2 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "dma_device_id": "system", 00:10:37.196 "dma_device_type": 1 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.196 "dma_device_type": 2 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "dma_device_id": "system", 00:10:37.196 "dma_device_type": 1 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:37.196 "dma_device_type": 2 00:10:37.196 } 00:10:37.196 ], 00:10:37.196 "driver_specific": { 00:10:37.196 "raid": { 00:10:37.196 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:37.196 "strip_size_kb": 64, 00:10:37.196 "state": "online", 00:10:37.196 "raid_level": "raid0", 00:10:37.196 "superblock": true, 00:10:37.196 "num_base_bdevs": 4, 00:10:37.196 "num_base_bdevs_discovered": 4, 00:10:37.196 "num_base_bdevs_operational": 4, 00:10:37.196 "base_bdevs_list": [ 00:10:37.196 { 00:10:37.196 "name": "pt1", 00:10:37.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.196 "is_configured": true, 00:10:37.196 "data_offset": 2048, 00:10:37.196 "data_size": 63488 00:10:37.196 }, 00:10:37.196 { 00:10:37.196 "name": "pt2", 00:10:37.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.197 "is_configured": true, 00:10:37.197 "data_offset": 2048, 00:10:37.197 "data_size": 63488 00:10:37.197 }, 00:10:37.197 { 00:10:37.197 "name": "pt3", 00:10:37.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.197 "is_configured": true, 00:10:37.197 "data_offset": 2048, 00:10:37.197 "data_size": 63488 00:10:37.197 }, 00:10:37.197 { 00:10:37.197 "name": "pt4", 00:10:37.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.197 "is_configured": true, 00:10:37.197 "data_offset": 2048, 00:10:37.197 "data_size": 63488 00:10:37.197 } 00:10:37.197 ] 00:10:37.197 } 00:10:37.197 } 00:10:37.197 }' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:37.197 pt2 00:10:37.197 pt3 00:10:37.197 pt4' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.197 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.457 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.457 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.458 14:43:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:37.458 [2024-12-09 14:43:15.424110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0db5650d-edc5-4c5e-b38c-889696c9916e 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0db5650d-edc5-4c5e-b38c-889696c9916e ']' 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 [2024-12-09 14:43:15.475705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.458 [2024-12-09 14:43:15.475750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.458 [2024-12-09 14:43:15.475859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.458 [2024-12-09 14:43:15.475946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.458 [2024-12-09 14:43:15.475965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.718 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 [2024-12-09 14:43:15.635493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:37.718 request: 00:10:37.718 { 00:10:37.718 [2024-12-09 14:43:15.638202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:37.718 [2024-12-09 14:43:15.638276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:37.718 [2024-12-09 14:43:15.638323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:37.718 [2024-12-09 14:43:15.638399] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:37.718 [2024-12-09 14:43:15.638476] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:37.718 [2024-12-09 14:43:15.638503] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:37.718 [2024-12-09 14:43:15.638529] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:37.719 [2024-12-09 14:43:15.638549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.719 [2024-12-09 14:43:15.638585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:37.719 "name": "raid_bdev1", 00:10:37.719 "raid_level": "raid0", 00:10:37.719 "base_bdevs": [ 00:10:37.719 "malloc1", 00:10:37.719 "malloc2", 00:10:37.719 "malloc3", 00:10:37.719 "malloc4" 00:10:37.719 ], 00:10:37.719 "strip_size_kb": 64, 00:10:37.719 "superblock": false, 00:10:37.719 "method": "bdev_raid_create", 00:10:37.719 "req_id": 1 00:10:37.719 } 00:10:37.719 Got JSON-RPC error response 00:10:37.719 response: 00:10:37.719 { 00:10:37.719 "code": -17, 00:10:37.719 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:37.719 } 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.719 [2024-12-09 14:43:15.699381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.719 [2024-12-09 14:43:15.699586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.719 [2024-12-09 14:43:15.699649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:37.719 [2024-12-09 14:43:15.699691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.719 [2024-12-09 14:43:15.702526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.719 [2024-12-09 14:43:15.702640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.719 [2024-12-09 14:43:15.702786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:37.719 [2024-12-09 14:43:15.702916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.719 pt1 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.719 "name": "raid_bdev1", 00:10:37.719 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:37.719 "strip_size_kb": 64, 00:10:37.719 "state": "configuring", 00:10:37.719 "raid_level": "raid0", 00:10:37.719 "superblock": true, 00:10:37.719 "num_base_bdevs": 4, 00:10:37.719 "num_base_bdevs_discovered": 1, 00:10:37.719 "num_base_bdevs_operational": 4, 00:10:37.719 "base_bdevs_list": [ 00:10:37.719 { 00:10:37.719 "name": "pt1", 00:10:37.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.719 "is_configured": true, 00:10:37.719 "data_offset": 2048, 00:10:37.719 "data_size": 63488 00:10:37.719 }, 00:10:37.719 { 00:10:37.719 "name": null, 00:10:37.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.719 "is_configured": false, 00:10:37.719 "data_offset": 2048, 00:10:37.719 "data_size": 63488 00:10:37.719 }, 00:10:37.719 { 00:10:37.719 "name": null, 00:10:37.719 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:37.719 "is_configured": false, 00:10:37.719 "data_offset": 2048, 00:10:37.719 "data_size": 63488 00:10:37.719 }, 00:10:37.719 { 00:10:37.719 "name": null, 00:10:37.719 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.719 "is_configured": false, 00:10:37.719 "data_offset": 2048, 00:10:37.719 "data_size": 63488 00:10:37.719 } 00:10:37.719 ] 00:10:37.719 }' 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.719 14:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 [2024-12-09 14:43:16.126960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.288 [2024-12-09 14:43:16.127099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.288 [2024-12-09 14:43:16.127130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:38.288 [2024-12-09 14:43:16.127148] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.288 [2024-12-09 14:43:16.127793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.288 [2024-12-09 14:43:16.127831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.288 [2024-12-09 14:43:16.127957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.288 [2024-12-09 14:43:16.127994] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.288 pt2 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 [2024-12-09 14:43:16.138910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.288 14:43:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.288 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.288 "name": "raid_bdev1", 00:10:38.288 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:38.288 "strip_size_kb": 64, 00:10:38.288 "state": "configuring", 00:10:38.288 "raid_level": "raid0", 00:10:38.288 "superblock": true, 00:10:38.288 "num_base_bdevs": 4, 00:10:38.288 "num_base_bdevs_discovered": 1, 00:10:38.288 "num_base_bdevs_operational": 4, 00:10:38.288 "base_bdevs_list": [ 00:10:38.288 { 00:10:38.288 "name": "pt1", 00:10:38.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.288 "is_configured": true, 00:10:38.288 "data_offset": 2048, 00:10:38.288 "data_size": 63488 00:10:38.288 }, 00:10:38.288 { 00:10:38.288 "name": null, 00:10:38.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.288 "is_configured": false, 00:10:38.288 "data_offset": 0, 00:10:38.288 "data_size": 63488 00:10:38.288 }, 00:10:38.289 { 00:10:38.289 "name": null, 00:10:38.289 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.289 "is_configured": false, 00:10:38.289 "data_offset": 2048, 00:10:38.289 "data_size": 63488 00:10:38.289 }, 00:10:38.289 { 00:10:38.289 "name": null, 00:10:38.289 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.289 "is_configured": false, 00:10:38.289 "data_offset": 2048, 00:10:38.289 "data_size": 63488 00:10:38.289 } 00:10:38.289 ] 00:10:38.289 }' 00:10:38.289 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.289 14:43:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.549 [2024-12-09 14:43:16.610116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.549 [2024-12-09 14:43:16.610234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.549 [2024-12-09 14:43:16.610266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:38.549 [2024-12-09 14:43:16.610280] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.549 [2024-12-09 14:43:16.610925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.549 [2024-12-09 14:43:16.610965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.549 [2024-12-09 14:43:16.611084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.549 [2024-12-09 14:43:16.611112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.549 pt2 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.549 [2024-12-09 14:43:16.618005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.549 [2024-12-09 14:43:16.618068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.549 [2024-12-09 14:43:16.618101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:38.549 [2024-12-09 14:43:16.618112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.549 [2024-12-09 14:43:16.618562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.549 [2024-12-09 14:43:16.618606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.549 [2024-12-09 14:43:16.618686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:38.549 [2024-12-09 14:43:16.618718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.549 pt3 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.549 [2024-12-09 14:43:16.625956] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:38.549 [2024-12-09 14:43:16.626013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.549 [2024-12-09 14:43:16.626034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:38.549 [2024-12-09 14:43:16.626045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.549 [2024-12-09 14:43:16.626495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.549 [2024-12-09 14:43:16.626529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:38.549 [2024-12-09 14:43:16.626622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:38.549 [2024-12-09 14:43:16.626651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:38.549 [2024-12-09 14:43:16.626833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.549 [2024-12-09 14:43:16.626846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:38.549 [2024-12-09 14:43:16.627138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:38.549 [2024-12-09 14:43:16.627334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.549 [2024-12-09 14:43:16.627351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:38.549 [2024-12-09 14:43:16.627533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.549 pt4 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.549 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.809 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.809 "name": "raid_bdev1", 00:10:38.809 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:38.809 "strip_size_kb": 64, 00:10:38.809 "state": "online", 00:10:38.809 "raid_level": "raid0", 00:10:38.809 
"superblock": true, 00:10:38.809 "num_base_bdevs": 4, 00:10:38.809 "num_base_bdevs_discovered": 4, 00:10:38.809 "num_base_bdevs_operational": 4, 00:10:38.809 "base_bdevs_list": [ 00:10:38.809 { 00:10:38.809 "name": "pt1", 00:10:38.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.809 "is_configured": true, 00:10:38.809 "data_offset": 2048, 00:10:38.809 "data_size": 63488 00:10:38.809 }, 00:10:38.809 { 00:10:38.809 "name": "pt2", 00:10:38.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.809 "is_configured": true, 00:10:38.809 "data_offset": 2048, 00:10:38.809 "data_size": 63488 00:10:38.809 }, 00:10:38.809 { 00:10:38.809 "name": "pt3", 00:10:38.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.809 "is_configured": true, 00:10:38.809 "data_offset": 2048, 00:10:38.809 "data_size": 63488 00:10:38.809 }, 00:10:38.809 { 00:10:38.809 "name": "pt4", 00:10:38.809 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.809 "is_configured": true, 00:10:38.809 "data_offset": 2048, 00:10:38.809 "data_size": 63488 00:10:38.809 } 00:10:38.809 ] 00:10:38.809 }' 00:10:38.809 14:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.809 14:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.070 14:43:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.070 [2024-12-09 14:43:17.085720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.070 "name": "raid_bdev1", 00:10:39.070 "aliases": [ 00:10:39.070 "0db5650d-edc5-4c5e-b38c-889696c9916e" 00:10:39.070 ], 00:10:39.070 "product_name": "Raid Volume", 00:10:39.070 "block_size": 512, 00:10:39.070 "num_blocks": 253952, 00:10:39.070 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:39.070 "assigned_rate_limits": { 00:10:39.070 "rw_ios_per_sec": 0, 00:10:39.070 "rw_mbytes_per_sec": 0, 00:10:39.070 "r_mbytes_per_sec": 0, 00:10:39.070 "w_mbytes_per_sec": 0 00:10:39.070 }, 00:10:39.070 "claimed": false, 00:10:39.070 "zoned": false, 00:10:39.070 "supported_io_types": { 00:10:39.070 "read": true, 00:10:39.070 "write": true, 00:10:39.070 "unmap": true, 00:10:39.070 "flush": true, 00:10:39.070 "reset": true, 00:10:39.070 "nvme_admin": false, 00:10:39.070 "nvme_io": false, 00:10:39.070 "nvme_io_md": false, 00:10:39.070 "write_zeroes": true, 00:10:39.070 "zcopy": false, 00:10:39.070 "get_zone_info": false, 00:10:39.070 "zone_management": false, 00:10:39.070 "zone_append": false, 00:10:39.070 "compare": false, 00:10:39.070 "compare_and_write": false, 00:10:39.070 "abort": false, 00:10:39.070 "seek_hole": false, 00:10:39.070 "seek_data": false, 00:10:39.070 "copy": false, 00:10:39.070 "nvme_iov_md": false 00:10:39.070 }, 00:10:39.070 
"memory_domains": [ 00:10:39.070 { 00:10:39.070 "dma_device_id": "system", 00:10:39.070 "dma_device_type": 1 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.070 "dma_device_type": 2 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "dma_device_id": "system", 00:10:39.070 "dma_device_type": 1 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.070 "dma_device_type": 2 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "dma_device_id": "system", 00:10:39.070 "dma_device_type": 1 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.070 "dma_device_type": 2 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "dma_device_id": "system", 00:10:39.070 "dma_device_type": 1 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.070 "dma_device_type": 2 00:10:39.070 } 00:10:39.070 ], 00:10:39.070 "driver_specific": { 00:10:39.070 "raid": { 00:10:39.070 "uuid": "0db5650d-edc5-4c5e-b38c-889696c9916e", 00:10:39.070 "strip_size_kb": 64, 00:10:39.070 "state": "online", 00:10:39.070 "raid_level": "raid0", 00:10:39.070 "superblock": true, 00:10:39.070 "num_base_bdevs": 4, 00:10:39.070 "num_base_bdevs_discovered": 4, 00:10:39.070 "num_base_bdevs_operational": 4, 00:10:39.070 "base_bdevs_list": [ 00:10:39.070 { 00:10:39.070 "name": "pt1", 00:10:39.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.070 "is_configured": true, 00:10:39.070 "data_offset": 2048, 00:10:39.070 "data_size": 63488 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "name": "pt2", 00:10:39.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.070 "is_configured": true, 00:10:39.070 "data_offset": 2048, 00:10:39.070 "data_size": 63488 00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "name": "pt3", 00:10:39.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.070 "is_configured": true, 00:10:39.070 "data_offset": 2048, 00:10:39.070 "data_size": 63488 
00:10:39.070 }, 00:10:39.070 { 00:10:39.070 "name": "pt4", 00:10:39.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.070 "is_configured": true, 00:10:39.070 "data_offset": 2048, 00:10:39.070 "data_size": 63488 00:10:39.070 } 00:10:39.070 ] 00:10:39.070 } 00:10:39.070 } 00:10:39.070 }' 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:39.070 pt2 00:10:39.070 pt3 00:10:39.070 pt4' 00:10:39.070 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:39.330 [2024-12-09 14:43:17.393130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0db5650d-edc5-4c5e-b38c-889696c9916e '!=' 0db5650d-edc5-4c5e-b38c-889696c9916e ']' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72005 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72005 ']' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72005 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.330 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72005 00:10:39.590 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.590 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.590 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72005' 00:10:39.590 killing process with pid 72005 00:10:39.590 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72005 00:10:39.590 [2024-12-09 14:43:17.471664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.590 14:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72005 00:10:39.590 [2024-12-09 14:43:17.471924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.590 [2024-12-09 14:43:17.472059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.590 [2024-12-09 14:43:17.472115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:39.851 [2024-12-09 14:43:17.928477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.233 14:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:41.233 00:10:41.233 real 0m5.852s 00:10:41.233 user 0m8.079s 00:10:41.233 sys 0m1.125s 00:10:41.233 14:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.233 14:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.233 ************************************ 00:10:41.233 END TEST raid_superblock_test 
00:10:41.233 ************************************ 00:10:41.233 14:43:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:41.233 14:43:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:41.233 14:43:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.233 14:43:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.233 ************************************ 00:10:41.233 START TEST raid_read_error_test 00:10:41.233 ************************************ 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:41.233 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:41.234 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:41.234 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0HrxM0O6XI 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72278 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72278 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72278 ']' 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.493 14:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.493 [2024-12-09 14:43:19.454109] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:10:41.493 [2024-12-09 14:43:19.454270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72278 ] 00:10:41.493 [2024-12-09 14:43:19.611599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.753 [2024-12-09 14:43:19.765141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.012 [2024-12-09 14:43:20.020043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.012 [2024-12-09 14:43:20.020142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.272 BaseBdev1_malloc 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.272 true 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.272 [2024-12-09 14:43:20.378620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:42.272 [2024-12-09 14:43:20.378793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.272 [2024-12-09 14:43:20.378831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:42.272 [2024-12-09 14:43:20.378847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.272 [2024-12-09 14:43:20.381609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.272 [2024-12-09 14:43:20.381658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:42.272 BaseBdev1 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.272 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 BaseBdev2_malloc 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 true 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 [2024-12-09 14:43:20.455887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:42.533 [2024-12-09 14:43:20.456086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.533 [2024-12-09 14:43:20.456116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:42.533 [2024-12-09 14:43:20.456133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.533 [2024-12-09 14:43:20.458919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.533 [2024-12-09 14:43:20.458972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:42.533 BaseBdev2 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 BaseBdev3_malloc 00:10:42.533 14:43:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 true 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 [2024-12-09 14:43:20.543092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:42.533 [2024-12-09 14:43:20.543168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.533 [2024-12-09 14:43:20.543189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:42.533 [2024-12-09 14:43:20.543204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.533 [2024-12-09 14:43:20.545753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.533 [2024-12-09 14:43:20.545884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:42.533 BaseBdev3 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 BaseBdev4_malloc 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 true 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 [2024-12-09 14:43:20.618876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:42.533 [2024-12-09 14:43:20.618950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.533 [2024-12-09 14:43:20.618973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:42.533 [2024-12-09 14:43:20.618988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.533 [2024-12-09 14:43:20.621525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.533 [2024-12-09 14:43:20.621591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:42.533 BaseBdev4 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.533 [2024-12-09 14:43:20.630920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.533 [2024-12-09 14:43:20.633147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.533 [2024-12-09 14:43:20.633236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.533 [2024-12-09 14:43:20.633310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.533 [2024-12-09 14:43:20.633561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:42.533 [2024-12-09 14:43:20.633603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.533 [2024-12-09 14:43:20.633875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:42.533 [2024-12-09 14:43:20.634079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:42.533 [2024-12-09 14:43:20.634093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:42.533 [2024-12-09 14:43:20.634283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:42.533 14:43:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.533 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.807 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.807 "name": "raid_bdev1", 00:10:42.807 "uuid": "e0bd553c-3545-482c-9537-ac0adaeef810", 00:10:42.807 "strip_size_kb": 64, 00:10:42.807 "state": "online", 00:10:42.807 "raid_level": "raid0", 00:10:42.807 "superblock": true, 00:10:42.807 "num_base_bdevs": 4, 00:10:42.807 "num_base_bdevs_discovered": 4, 00:10:42.807 "num_base_bdevs_operational": 4, 00:10:42.807 "base_bdevs_list": [ 00:10:42.807 
{ 00:10:42.807 "name": "BaseBdev1", 00:10:42.807 "uuid": "617f8b06-24d9-5b92-afff-214d6e2e9ebb", 00:10:42.807 "is_configured": true, 00:10:42.807 "data_offset": 2048, 00:10:42.807 "data_size": 63488 00:10:42.807 }, 00:10:42.807 { 00:10:42.807 "name": "BaseBdev2", 00:10:42.807 "uuid": "52674cc0-cb8b-5620-a6a7-f273e30853bc", 00:10:42.807 "is_configured": true, 00:10:42.807 "data_offset": 2048, 00:10:42.807 "data_size": 63488 00:10:42.807 }, 00:10:42.807 { 00:10:42.807 "name": "BaseBdev3", 00:10:42.807 "uuid": "3bffea8a-7f13-57a0-91ef-33e6db543ddc", 00:10:42.807 "is_configured": true, 00:10:42.807 "data_offset": 2048, 00:10:42.807 "data_size": 63488 00:10:42.807 }, 00:10:42.807 { 00:10:42.807 "name": "BaseBdev4", 00:10:42.807 "uuid": "19ec2276-7a94-51ea-aa2b-be738ff25aee", 00:10:42.807 "is_configured": true, 00:10:42.807 "data_offset": 2048, 00:10:42.807 "data_size": 63488 00:10:42.807 } 00:10:42.807 ] 00:10:42.807 }' 00:10:42.807 14:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.807 14:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.082 14:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.082 14:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:43.082 [2024-12-09 14:43:21.151692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.020 14:43:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.020 14:43:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.020 "name": "raid_bdev1", 00:10:44.020 "uuid": "e0bd553c-3545-482c-9537-ac0adaeef810", 00:10:44.020 "strip_size_kb": 64, 00:10:44.020 "state": "online", 00:10:44.020 "raid_level": "raid0", 00:10:44.020 "superblock": true, 00:10:44.020 "num_base_bdevs": 4, 00:10:44.020 "num_base_bdevs_discovered": 4, 00:10:44.020 "num_base_bdevs_operational": 4, 00:10:44.020 "base_bdevs_list": [ 00:10:44.020 { 00:10:44.020 "name": "BaseBdev1", 00:10:44.020 "uuid": "617f8b06-24d9-5b92-afff-214d6e2e9ebb", 00:10:44.020 "is_configured": true, 00:10:44.020 "data_offset": 2048, 00:10:44.020 "data_size": 63488 00:10:44.020 }, 00:10:44.020 { 00:10:44.020 "name": "BaseBdev2", 00:10:44.020 "uuid": "52674cc0-cb8b-5620-a6a7-f273e30853bc", 00:10:44.020 "is_configured": true, 00:10:44.020 "data_offset": 2048, 00:10:44.020 "data_size": 63488 00:10:44.020 }, 00:10:44.020 { 00:10:44.020 "name": "BaseBdev3", 00:10:44.020 "uuid": "3bffea8a-7f13-57a0-91ef-33e6db543ddc", 00:10:44.020 "is_configured": true, 00:10:44.020 "data_offset": 2048, 00:10:44.020 "data_size": 63488 00:10:44.020 }, 00:10:44.020 { 00:10:44.020 "name": "BaseBdev4", 00:10:44.020 "uuid": "19ec2276-7a94-51ea-aa2b-be738ff25aee", 00:10:44.020 "is_configured": true, 00:10:44.020 "data_offset": 2048, 00:10:44.020 "data_size": 63488 00:10:44.020 } 00:10:44.020 ] 00:10:44.020 }' 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.020 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.588 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.588 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.588 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.589 [2024-12-09 14:43:22.521690] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.589 [2024-12-09 14:43:22.521749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.589 [2024-12-09 14:43:22.524777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.589 [2024-12-09 14:43:22.524857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.589 [2024-12-09 14:43:22.524911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.589 [2024-12-09 14:43:22.524927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:44.589 { 00:10:44.589 "results": [ 00:10:44.589 { 00:10:44.589 "job": "raid_bdev1", 00:10:44.589 "core_mask": "0x1", 00:10:44.589 "workload": "randrw", 00:10:44.589 "percentage": 50, 00:10:44.589 "status": "finished", 00:10:44.589 "queue_depth": 1, 00:10:44.589 "io_size": 131072, 00:10:44.589 "runtime": 1.370384, 00:10:44.589 "iops": 11747.802075914487, 00:10:44.589 "mibps": 1468.475259489311, 00:10:44.589 "io_failed": 1, 00:10:44.589 "io_timeout": 0, 00:10:44.589 "avg_latency_us": 119.37996083430524, 00:10:44.589 "min_latency_us": 29.289082969432314, 00:10:44.589 "max_latency_us": 1724.2550218340612 00:10:44.589 } 00:10:44.589 ], 00:10:44.589 "core_count": 1 00:10:44.589 } 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72278 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72278 ']' 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72278 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72278 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72278' 00:10:44.589 killing process with pid 72278 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72278 00:10:44.589 [2024-12-09 14:43:22.560282] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.589 14:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72278 00:10:44.848 [2024-12-09 14:43:22.940445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0HrxM0O6XI 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:46.228 ************************************ 00:10:46.228 END TEST raid_read_error_test 00:10:46.228 ************************************ 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:46.228 00:10:46.228 real 0m4.997s 
00:10:46.228 user 0m5.663s 00:10:46.228 sys 0m0.728s 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.228 14:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.487 14:43:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:46.487 14:43:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:46.487 14:43:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.487 14:43:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.487 ************************************ 00:10:46.487 START TEST raid_write_error_test 00:10:46.487 ************************************ 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Sdb1P5YcO2 00:10:46.487 14:43:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72425 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72425 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72425 ']' 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.487 14:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.487 [2024-12-09 14:43:24.505123] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:10:46.487 [2024-12-09 14:43:24.505324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72425 ] 00:10:46.746 [2024-12-09 14:43:24.683684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.746 [2024-12-09 14:43:24.833321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.005 [2024-12-09 14:43:25.090026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.005 [2024-12-09 14:43:25.090254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.263 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.263 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:47.263 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.263 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:47.263 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.263 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.519 BaseBdev1_malloc 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.519 true 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.519 [2024-12-09 14:43:25.418494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:47.519 [2024-12-09 14:43:25.418602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.519 [2024-12-09 14:43:25.418635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:47.519 [2024-12-09 14:43:25.418654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.519 [2024-12-09 14:43:25.421546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.519 [2024-12-09 14:43:25.421627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:47.519 BaseBdev1 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.519 BaseBdev2_malloc 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:47.519 14:43:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.519 true 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.519 [2024-12-09 14:43:25.492975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:47.519 [2024-12-09 14:43:25.493063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.519 [2024-12-09 14:43:25.493085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:47.519 [2024-12-09 14:43:25.493099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.519 [2024-12-09 14:43:25.495750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.519 [2024-12-09 14:43:25.495898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:47.519 BaseBdev2 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:47.519 BaseBdev3_malloc 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.519 true 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.519 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:47.520 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.520 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.520 [2024-12-09 14:43:25.581504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:47.520 [2024-12-09 14:43:25.581612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.520 [2024-12-09 14:43:25.581641] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:47.520 [2024-12-09 14:43:25.581656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.520 [2024-12-09 14:43:25.584317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.520 [2024-12-09 14:43:25.584371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:47.520 BaseBdev3 00:10:47.520 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.520 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.520 14:43:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:47.520 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.520 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.778 BaseBdev4_malloc 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.778 true 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.778 [2024-12-09 14:43:25.663233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:47.778 [2024-12-09 14:43:25.663328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.778 [2024-12-09 14:43:25.663358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:47.778 [2024-12-09 14:43:25.663375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.778 [2024-12-09 14:43:25.666213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.778 [2024-12-09 14:43:25.666268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:47.778 BaseBdev4 
00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.778 [2024-12-09 14:43:25.675296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.778 [2024-12-09 14:43:25.677823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.778 [2024-12-09 14:43:25.677913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.778 [2024-12-09 14:43:25.677986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.778 [2024-12-09 14:43:25.678251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:47.778 [2024-12-09 14:43:25.678273] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.778 [2024-12-09 14:43:25.678652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:47.778 [2024-12-09 14:43:25.678910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:47.778 [2024-12-09 14:43:25.678971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:47.778 [2024-12-09 14:43:25.679298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.778 "name": "raid_bdev1", 00:10:47.778 "uuid": "c86172ff-e8aa-4615-838c-eb0af7f3c892", 00:10:47.778 "strip_size_kb": 64, 00:10:47.778 "state": "online", 00:10:47.778 "raid_level": "raid0", 00:10:47.778 "superblock": true, 00:10:47.778 "num_base_bdevs": 4, 00:10:47.778 "num_base_bdevs_discovered": 4, 00:10:47.778 
"num_base_bdevs_operational": 4, 00:10:47.778 "base_bdevs_list": [ 00:10:47.778 { 00:10:47.778 "name": "BaseBdev1", 00:10:47.778 "uuid": "da3d20d8-6770-5458-be3c-4c541a89127b", 00:10:47.778 "is_configured": true, 00:10:47.778 "data_offset": 2048, 00:10:47.778 "data_size": 63488 00:10:47.778 }, 00:10:47.778 { 00:10:47.778 "name": "BaseBdev2", 00:10:47.778 "uuid": "e5d45fa2-bf1f-5b31-85ff-85dfb77bd600", 00:10:47.778 "is_configured": true, 00:10:47.778 "data_offset": 2048, 00:10:47.778 "data_size": 63488 00:10:47.778 }, 00:10:47.778 { 00:10:47.778 "name": "BaseBdev3", 00:10:47.778 "uuid": "eced7774-ebd5-5ac6-a152-96f44dc691bc", 00:10:47.778 "is_configured": true, 00:10:47.778 "data_offset": 2048, 00:10:47.778 "data_size": 63488 00:10:47.778 }, 00:10:47.778 { 00:10:47.778 "name": "BaseBdev4", 00:10:47.778 "uuid": "3bd428f9-c03a-5f34-b0aa-948331ca95b6", 00:10:47.778 "is_configured": true, 00:10:47.778 "data_offset": 2048, 00:10:47.778 "data_size": 63488 00:10:47.778 } 00:10:47.778 ] 00:10:47.778 }' 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.778 14:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.361 14:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:48.361 14:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:48.362 [2024-12-09 14:43:26.268127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:49.300 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:49.300 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.300 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.300 14:43:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.301 "name": "raid_bdev1", 00:10:49.301 "uuid": "c86172ff-e8aa-4615-838c-eb0af7f3c892", 00:10:49.301 "strip_size_kb": 64, 00:10:49.301 "state": "online", 00:10:49.301 "raid_level": "raid0", 00:10:49.301 "superblock": true, 00:10:49.301 "num_base_bdevs": 4, 00:10:49.301 "num_base_bdevs_discovered": 4, 00:10:49.301 "num_base_bdevs_operational": 4, 00:10:49.301 "base_bdevs_list": [ 00:10:49.301 { 00:10:49.301 "name": "BaseBdev1", 00:10:49.301 "uuid": "da3d20d8-6770-5458-be3c-4c541a89127b", 00:10:49.301 "is_configured": true, 00:10:49.301 "data_offset": 2048, 00:10:49.301 "data_size": 63488 00:10:49.301 }, 00:10:49.301 { 00:10:49.301 "name": "BaseBdev2", 00:10:49.301 "uuid": "e5d45fa2-bf1f-5b31-85ff-85dfb77bd600", 00:10:49.301 "is_configured": true, 00:10:49.301 "data_offset": 2048, 00:10:49.301 "data_size": 63488 00:10:49.301 }, 00:10:49.301 { 00:10:49.301 "name": "BaseBdev3", 00:10:49.301 "uuid": "eced7774-ebd5-5ac6-a152-96f44dc691bc", 00:10:49.301 "is_configured": true, 00:10:49.301 "data_offset": 2048, 00:10:49.301 "data_size": 63488 00:10:49.301 }, 00:10:49.301 { 00:10:49.301 "name": "BaseBdev4", 00:10:49.301 "uuid": "3bd428f9-c03a-5f34-b0aa-948331ca95b6", 00:10:49.301 "is_configured": true, 00:10:49.301 "data_offset": 2048, 00:10:49.301 "data_size": 63488 00:10:49.301 } 00:10:49.301 ] 00:10:49.301 }' 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.301 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:49.562 [2024-12-09 14:43:27.643366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.562 [2024-12-09 14:43:27.643541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.562 [2024-12-09 14:43:27.646554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.562 [2024-12-09 14:43:27.646692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.562 [2024-12-09 14:43:27.646776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.562 [2024-12-09 14:43:27.646852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:49.562 { 00:10:49.562 "results": [ 00:10:49.562 { 00:10:49.562 "job": "raid_bdev1", 00:10:49.562 "core_mask": "0x1", 00:10:49.562 "workload": "randrw", 00:10:49.562 "percentage": 50, 00:10:49.562 "status": "finished", 00:10:49.562 "queue_depth": 1, 00:10:49.562 "io_size": 131072, 00:10:49.562 "runtime": 1.375461, 00:10:49.562 "iops": 11578.663444474252, 00:10:49.562 "mibps": 1447.3329305592815, 00:10:49.562 "io_failed": 1, 00:10:49.562 "io_timeout": 0, 00:10:49.562 "avg_latency_us": 121.28154267162708, 00:10:49.562 "min_latency_us": 31.07772925764192, 00:10:49.562 "max_latency_us": 1695.6366812227075 00:10:49.562 } 00:10:49.562 ], 00:10:49.562 "core_count": 1 00:10:49.562 } 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72425 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72425 ']' 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72425 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.562 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72425 00:10:49.822 killing process with pid 72425 00:10:49.822 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.822 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.822 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72425' 00:10:49.822 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72425 00:10:49.822 [2024-12-09 14:43:27.689189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.822 14:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72425 00:10:50.081 [2024-12-09 14:43:28.073379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Sdb1P5YcO2 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:51.461 00:10:51.461 real 0m5.055s 00:10:51.461 user 0m5.820s 00:10:51.461 sys 0m0.714s 00:10:51.461 
14:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.461 ************************************ 00:10:51.461 END TEST raid_write_error_test 00:10:51.461 ************************************ 00:10:51.461 14:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.461 14:43:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:51.461 14:43:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:51.461 14:43:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:51.461 14:43:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.461 14:43:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.461 ************************************ 00:10:51.461 START TEST raid_state_function_test 00:10:51.461 ************************************ 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.461 14:43:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:51.461 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:51.462 14:43:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72573 00:10:51.462 Process raid pid: 72573 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72573' 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72573 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72573 ']' 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.462 14:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.722 [2024-12-09 14:43:29.625542] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:10:51.722 [2024-12-09 14:43:29.625690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.722 [2024-12-09 14:43:29.805065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.981 [2024-12-09 14:43:29.951366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.240 [2024-12-09 14:43:30.213675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.240 [2024-12-09 14:43:30.213846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.500 [2024-12-09 14:43:30.488483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.500 [2024-12-09 14:43:30.488611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.500 [2024-12-09 14:43:30.488627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.500 [2024-12-09 14:43:30.488642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.500 [2024-12-09 14:43:30.488652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:52.500 [2024-12-09 14:43:30.488665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.500 [2024-12-09 14:43:30.488675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.500 [2024-12-09 14:43:30.488688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.500 "name": "Existed_Raid", 00:10:52.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.500 "strip_size_kb": 64, 00:10:52.500 "state": "configuring", 00:10:52.500 "raid_level": "concat", 00:10:52.500 "superblock": false, 00:10:52.500 "num_base_bdevs": 4, 00:10:52.500 "num_base_bdevs_discovered": 0, 00:10:52.500 "num_base_bdevs_operational": 4, 00:10:52.500 "base_bdevs_list": [ 00:10:52.500 { 00:10:52.500 "name": "BaseBdev1", 00:10:52.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.500 "is_configured": false, 00:10:52.500 "data_offset": 0, 00:10:52.500 "data_size": 0 00:10:52.500 }, 00:10:52.500 { 00:10:52.500 "name": "BaseBdev2", 00:10:52.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.500 "is_configured": false, 00:10:52.500 "data_offset": 0, 00:10:52.500 "data_size": 0 00:10:52.500 }, 00:10:52.500 { 00:10:52.500 "name": "BaseBdev3", 00:10:52.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.500 "is_configured": false, 00:10:52.500 "data_offset": 0, 00:10:52.500 "data_size": 0 00:10:52.500 }, 00:10:52.500 { 00:10:52.500 "name": "BaseBdev4", 00:10:52.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.500 "is_configured": false, 00:10:52.500 "data_offset": 0, 00:10:52.500 "data_size": 0 00:10:52.500 } 00:10:52.500 ] 00:10:52.500 }' 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.500 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.070 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:53.070 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.070 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.070 [2024-12-09 14:43:30.991681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.070 [2024-12-09 14:43:30.991868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:53.070 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.070 14:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.070 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.070 14:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.070 [2024-12-09 14:43:30.999607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.070 [2024-12-09 14:43:30.999720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.070 [2024-12-09 14:43:30.999767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.070 [2024-12-09 14:43:30.999808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.070 [2024-12-09 14:43:30.999841] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.070 [2024-12-09 14:43:30.999886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.071 [2024-12-09 14:43:30.999920] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.071 [2024-12-09 14:43:30.999962] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.071 [2024-12-09 14:43:31.053934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.071 BaseBdev1 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.071 [ 00:10:53.071 { 00:10:53.071 "name": "BaseBdev1", 00:10:53.071 "aliases": [ 00:10:53.071 "0475118f-ce56-4cf7-8db1-124015924ee3" 00:10:53.071 ], 00:10:53.071 "product_name": "Malloc disk", 00:10:53.071 "block_size": 512, 00:10:53.071 "num_blocks": 65536, 00:10:53.071 "uuid": "0475118f-ce56-4cf7-8db1-124015924ee3", 00:10:53.071 "assigned_rate_limits": { 00:10:53.071 "rw_ios_per_sec": 0, 00:10:53.071 "rw_mbytes_per_sec": 0, 00:10:53.071 "r_mbytes_per_sec": 0, 00:10:53.071 "w_mbytes_per_sec": 0 00:10:53.071 }, 00:10:53.071 "claimed": true, 00:10:53.071 "claim_type": "exclusive_write", 00:10:53.071 "zoned": false, 00:10:53.071 "supported_io_types": { 00:10:53.071 "read": true, 00:10:53.071 "write": true, 00:10:53.071 "unmap": true, 00:10:53.071 "flush": true, 00:10:53.071 "reset": true, 00:10:53.071 "nvme_admin": false, 00:10:53.071 "nvme_io": false, 00:10:53.071 "nvme_io_md": false, 00:10:53.071 "write_zeroes": true, 00:10:53.071 "zcopy": true, 00:10:53.071 "get_zone_info": false, 00:10:53.071 "zone_management": false, 00:10:53.071 "zone_append": false, 00:10:53.071 "compare": false, 00:10:53.071 "compare_and_write": false, 00:10:53.071 "abort": true, 00:10:53.071 "seek_hole": false, 00:10:53.071 "seek_data": false, 00:10:53.071 "copy": true, 00:10:53.071 "nvme_iov_md": false 00:10:53.071 }, 00:10:53.071 "memory_domains": [ 00:10:53.071 { 00:10:53.071 "dma_device_id": "system", 00:10:53.071 "dma_device_type": 1 00:10:53.071 }, 00:10:53.071 { 00:10:53.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.071 "dma_device_type": 2 00:10:53.071 } 00:10:53.071 ], 00:10:53.071 "driver_specific": {} 00:10:53.071 } 00:10:53.071 ] 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.071 "name": "Existed_Raid", 
00:10:53.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.071 "strip_size_kb": 64, 00:10:53.071 "state": "configuring", 00:10:53.071 "raid_level": "concat", 00:10:53.071 "superblock": false, 00:10:53.071 "num_base_bdevs": 4, 00:10:53.071 "num_base_bdevs_discovered": 1, 00:10:53.071 "num_base_bdevs_operational": 4, 00:10:53.071 "base_bdevs_list": [ 00:10:53.071 { 00:10:53.071 "name": "BaseBdev1", 00:10:53.071 "uuid": "0475118f-ce56-4cf7-8db1-124015924ee3", 00:10:53.071 "is_configured": true, 00:10:53.071 "data_offset": 0, 00:10:53.071 "data_size": 65536 00:10:53.071 }, 00:10:53.071 { 00:10:53.071 "name": "BaseBdev2", 00:10:53.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.071 "is_configured": false, 00:10:53.071 "data_offset": 0, 00:10:53.071 "data_size": 0 00:10:53.071 }, 00:10:53.071 { 00:10:53.071 "name": "BaseBdev3", 00:10:53.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.071 "is_configured": false, 00:10:53.071 "data_offset": 0, 00:10:53.071 "data_size": 0 00:10:53.071 }, 00:10:53.071 { 00:10:53.071 "name": "BaseBdev4", 00:10:53.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.071 "is_configured": false, 00:10:53.071 "data_offset": 0, 00:10:53.071 "data_size": 0 00:10:53.071 } 00:10:53.071 ] 00:10:53.071 }' 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.071 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.641 [2024-12-09 14:43:31.545233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.641 [2024-12-09 14:43:31.545454] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.641 [2024-12-09 14:43:31.557240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.641 [2024-12-09 14:43:31.559545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.641 [2024-12-09 14:43:31.559625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.641 [2024-12-09 14:43:31.559641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.641 [2024-12-09 14:43:31.559657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.641 [2024-12-09 14:43:31.559667] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.641 [2024-12-09 14:43:31.559680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.641 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.641 "name": "Existed_Raid", 00:10:53.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.641 "strip_size_kb": 64, 00:10:53.641 "state": "configuring", 00:10:53.641 "raid_level": "concat", 00:10:53.641 "superblock": false, 00:10:53.641 "num_base_bdevs": 4, 00:10:53.641 
"num_base_bdevs_discovered": 1, 00:10:53.641 "num_base_bdevs_operational": 4, 00:10:53.641 "base_bdevs_list": [ 00:10:53.641 { 00:10:53.641 "name": "BaseBdev1", 00:10:53.641 "uuid": "0475118f-ce56-4cf7-8db1-124015924ee3", 00:10:53.641 "is_configured": true, 00:10:53.641 "data_offset": 0, 00:10:53.641 "data_size": 65536 00:10:53.641 }, 00:10:53.641 { 00:10:53.642 "name": "BaseBdev2", 00:10:53.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.642 "is_configured": false, 00:10:53.642 "data_offset": 0, 00:10:53.642 "data_size": 0 00:10:53.642 }, 00:10:53.642 { 00:10:53.642 "name": "BaseBdev3", 00:10:53.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.642 "is_configured": false, 00:10:53.642 "data_offset": 0, 00:10:53.642 "data_size": 0 00:10:53.642 }, 00:10:53.642 { 00:10:53.642 "name": "BaseBdev4", 00:10:53.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.642 "is_configured": false, 00:10:53.642 "data_offset": 0, 00:10:53.642 "data_size": 0 00:10:53.642 } 00:10:53.642 ] 00:10:53.642 }' 00:10:53.642 14:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.642 14:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.004 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.004 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.004 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.287 [2024-12-09 14:43:32.075936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.287 BaseBdev2 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.287 14:43:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.287 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.287 [ 00:10:54.287 { 00:10:54.287 "name": "BaseBdev2", 00:10:54.287 "aliases": [ 00:10:54.287 "4edb1e4c-8ef2-455f-911a-e3a58ddf1c6f" 00:10:54.287 ], 00:10:54.287 "product_name": "Malloc disk", 00:10:54.287 "block_size": 512, 00:10:54.287 "num_blocks": 65536, 00:10:54.287 "uuid": "4edb1e4c-8ef2-455f-911a-e3a58ddf1c6f", 00:10:54.287 "assigned_rate_limits": { 00:10:54.287 "rw_ios_per_sec": 0, 00:10:54.287 "rw_mbytes_per_sec": 0, 00:10:54.288 "r_mbytes_per_sec": 0, 00:10:54.288 "w_mbytes_per_sec": 0 00:10:54.288 }, 00:10:54.288 "claimed": true, 00:10:54.288 "claim_type": "exclusive_write", 00:10:54.288 "zoned": false, 00:10:54.288 "supported_io_types": { 
00:10:54.288 "read": true, 00:10:54.288 "write": true, 00:10:54.288 "unmap": true, 00:10:54.288 "flush": true, 00:10:54.288 "reset": true, 00:10:54.288 "nvme_admin": false, 00:10:54.288 "nvme_io": false, 00:10:54.288 "nvme_io_md": false, 00:10:54.288 "write_zeroes": true, 00:10:54.288 "zcopy": true, 00:10:54.288 "get_zone_info": false, 00:10:54.288 "zone_management": false, 00:10:54.288 "zone_append": false, 00:10:54.288 "compare": false, 00:10:54.288 "compare_and_write": false, 00:10:54.288 "abort": true, 00:10:54.288 "seek_hole": false, 00:10:54.288 "seek_data": false, 00:10:54.288 "copy": true, 00:10:54.288 "nvme_iov_md": false 00:10:54.288 }, 00:10:54.288 "memory_domains": [ 00:10:54.288 { 00:10:54.288 "dma_device_id": "system", 00:10:54.288 "dma_device_type": 1 00:10:54.288 }, 00:10:54.288 { 00:10:54.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.288 "dma_device_type": 2 00:10:54.288 } 00:10:54.288 ], 00:10:54.288 "driver_specific": {} 00:10:54.288 } 00:10:54.288 ] 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.288 "name": "Existed_Raid", 00:10:54.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.288 "strip_size_kb": 64, 00:10:54.288 "state": "configuring", 00:10:54.288 "raid_level": "concat", 00:10:54.288 "superblock": false, 00:10:54.288 "num_base_bdevs": 4, 00:10:54.288 "num_base_bdevs_discovered": 2, 00:10:54.288 "num_base_bdevs_operational": 4, 00:10:54.288 "base_bdevs_list": [ 00:10:54.288 { 00:10:54.288 "name": "BaseBdev1", 00:10:54.288 "uuid": "0475118f-ce56-4cf7-8db1-124015924ee3", 00:10:54.288 "is_configured": true, 00:10:54.288 "data_offset": 0, 00:10:54.288 "data_size": 65536 00:10:54.288 }, 00:10:54.288 { 00:10:54.288 "name": "BaseBdev2", 00:10:54.288 "uuid": "4edb1e4c-8ef2-455f-911a-e3a58ddf1c6f", 00:10:54.288 
"is_configured": true, 00:10:54.288 "data_offset": 0, 00:10:54.288 "data_size": 65536 00:10:54.288 }, 00:10:54.288 { 00:10:54.288 "name": "BaseBdev3", 00:10:54.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.288 "is_configured": false, 00:10:54.288 "data_offset": 0, 00:10:54.288 "data_size": 0 00:10:54.288 }, 00:10:54.288 { 00:10:54.288 "name": "BaseBdev4", 00:10:54.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.288 "is_configured": false, 00:10:54.288 "data_offset": 0, 00:10:54.288 "data_size": 0 00:10:54.288 } 00:10:54.288 ] 00:10:54.288 }' 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.288 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.548 [2024-12-09 14:43:32.630185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.548 BaseBdev3 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.548 [ 00:10:54.548 { 00:10:54.548 "name": "BaseBdev3", 00:10:54.548 "aliases": [ 00:10:54.548 "8a33aecf-38de-4344-9aec-8b25e0defd36" 00:10:54.548 ], 00:10:54.548 "product_name": "Malloc disk", 00:10:54.548 "block_size": 512, 00:10:54.548 "num_blocks": 65536, 00:10:54.548 "uuid": "8a33aecf-38de-4344-9aec-8b25e0defd36", 00:10:54.548 "assigned_rate_limits": { 00:10:54.548 "rw_ios_per_sec": 0, 00:10:54.548 "rw_mbytes_per_sec": 0, 00:10:54.548 "r_mbytes_per_sec": 0, 00:10:54.548 "w_mbytes_per_sec": 0 00:10:54.548 }, 00:10:54.548 "claimed": true, 00:10:54.548 "claim_type": "exclusive_write", 00:10:54.548 "zoned": false, 00:10:54.548 "supported_io_types": { 00:10:54.548 "read": true, 00:10:54.548 "write": true, 00:10:54.548 "unmap": true, 00:10:54.548 "flush": true, 00:10:54.548 "reset": true, 00:10:54.548 "nvme_admin": false, 00:10:54.548 "nvme_io": false, 00:10:54.548 "nvme_io_md": false, 00:10:54.548 "write_zeroes": true, 00:10:54.548 "zcopy": true, 00:10:54.548 "get_zone_info": false, 00:10:54.548 "zone_management": false, 00:10:54.548 "zone_append": false, 00:10:54.548 "compare": false, 00:10:54.548 "compare_and_write": false, 
00:10:54.548 "abort": true, 00:10:54.548 "seek_hole": false, 00:10:54.548 "seek_data": false, 00:10:54.548 "copy": true, 00:10:54.548 "nvme_iov_md": false 00:10:54.548 }, 00:10:54.548 "memory_domains": [ 00:10:54.548 { 00:10:54.548 "dma_device_id": "system", 00:10:54.548 "dma_device_type": 1 00:10:54.548 }, 00:10:54.548 { 00:10:54.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.548 "dma_device_type": 2 00:10:54.548 } 00:10:54.548 ], 00:10:54.548 "driver_specific": {} 00:10:54.548 } 00:10:54.548 ] 00:10:54.548 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.808 "name": "Existed_Raid", 00:10:54.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.808 "strip_size_kb": 64, 00:10:54.808 "state": "configuring", 00:10:54.808 "raid_level": "concat", 00:10:54.808 "superblock": false, 00:10:54.808 "num_base_bdevs": 4, 00:10:54.808 "num_base_bdevs_discovered": 3, 00:10:54.808 "num_base_bdevs_operational": 4, 00:10:54.808 "base_bdevs_list": [ 00:10:54.808 { 00:10:54.808 "name": "BaseBdev1", 00:10:54.808 "uuid": "0475118f-ce56-4cf7-8db1-124015924ee3", 00:10:54.808 "is_configured": true, 00:10:54.808 "data_offset": 0, 00:10:54.808 "data_size": 65536 00:10:54.808 }, 00:10:54.808 { 00:10:54.808 "name": "BaseBdev2", 00:10:54.808 "uuid": "4edb1e4c-8ef2-455f-911a-e3a58ddf1c6f", 00:10:54.808 "is_configured": true, 00:10:54.808 "data_offset": 0, 00:10:54.808 "data_size": 65536 00:10:54.808 }, 00:10:54.808 { 00:10:54.808 "name": "BaseBdev3", 00:10:54.808 "uuid": "8a33aecf-38de-4344-9aec-8b25e0defd36", 00:10:54.808 "is_configured": true, 00:10:54.808 "data_offset": 0, 00:10:54.808 "data_size": 65536 00:10:54.808 }, 00:10:54.808 { 00:10:54.808 "name": "BaseBdev4", 00:10:54.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.808 "is_configured": false, 
00:10:54.808 "data_offset": 0, 00:10:54.808 "data_size": 0 00:10:54.808 } 00:10:54.808 ] 00:10:54.808 }' 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.808 14:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.068 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.068 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.068 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.068 [2024-12-09 14:43:33.188623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.068 [2024-12-09 14:43:33.188702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.068 [2024-12-09 14:43:33.188713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:55.327 [2024-12-09 14:43:33.189043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:55.327 [2024-12-09 14:43:33.189275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.327 [2024-12-09 14:43:33.189290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:55.327 [2024-12-09 14:43:33.189674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.327 BaseBdev4 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.327 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.327 [ 00:10:55.327 { 00:10:55.327 "name": "BaseBdev4", 00:10:55.327 "aliases": [ 00:10:55.327 "7f968e89-3dca-48af-8cc5-8e80b1768ec7" 00:10:55.327 ], 00:10:55.328 "product_name": "Malloc disk", 00:10:55.328 "block_size": 512, 00:10:55.328 "num_blocks": 65536, 00:10:55.328 "uuid": "7f968e89-3dca-48af-8cc5-8e80b1768ec7", 00:10:55.328 "assigned_rate_limits": { 00:10:55.328 "rw_ios_per_sec": 0, 00:10:55.328 "rw_mbytes_per_sec": 0, 00:10:55.328 "r_mbytes_per_sec": 0, 00:10:55.328 "w_mbytes_per_sec": 0 00:10:55.328 }, 00:10:55.328 "claimed": true, 00:10:55.328 "claim_type": "exclusive_write", 00:10:55.328 "zoned": false, 00:10:55.328 "supported_io_types": { 00:10:55.328 "read": true, 00:10:55.328 "write": true, 00:10:55.328 "unmap": true, 00:10:55.328 "flush": true, 00:10:55.328 "reset": true, 00:10:55.328 
"nvme_admin": false, 00:10:55.328 "nvme_io": false, 00:10:55.328 "nvme_io_md": false, 00:10:55.328 "write_zeroes": true, 00:10:55.328 "zcopy": true, 00:10:55.328 "get_zone_info": false, 00:10:55.328 "zone_management": false, 00:10:55.328 "zone_append": false, 00:10:55.328 "compare": false, 00:10:55.328 "compare_and_write": false, 00:10:55.328 "abort": true, 00:10:55.328 "seek_hole": false, 00:10:55.328 "seek_data": false, 00:10:55.328 "copy": true, 00:10:55.328 "nvme_iov_md": false 00:10:55.328 }, 00:10:55.328 "memory_domains": [ 00:10:55.328 { 00:10:55.328 "dma_device_id": "system", 00:10:55.328 "dma_device_type": 1 00:10:55.328 }, 00:10:55.328 { 00:10:55.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.328 "dma_device_type": 2 00:10:55.328 } 00:10:55.328 ], 00:10:55.328 "driver_specific": {} 00:10:55.328 } 00:10:55.328 ] 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.328 
14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.328 "name": "Existed_Raid", 00:10:55.328 "uuid": "fb1ca53a-6932-41a5-88d6-c191d41871e5", 00:10:55.328 "strip_size_kb": 64, 00:10:55.328 "state": "online", 00:10:55.328 "raid_level": "concat", 00:10:55.328 "superblock": false, 00:10:55.328 "num_base_bdevs": 4, 00:10:55.328 "num_base_bdevs_discovered": 4, 00:10:55.328 "num_base_bdevs_operational": 4, 00:10:55.328 "base_bdevs_list": [ 00:10:55.328 { 00:10:55.328 "name": "BaseBdev1", 00:10:55.328 "uuid": "0475118f-ce56-4cf7-8db1-124015924ee3", 00:10:55.328 "is_configured": true, 00:10:55.328 "data_offset": 0, 00:10:55.328 "data_size": 65536 00:10:55.328 }, 00:10:55.328 { 00:10:55.328 "name": "BaseBdev2", 00:10:55.328 "uuid": "4edb1e4c-8ef2-455f-911a-e3a58ddf1c6f", 00:10:55.328 "is_configured": true, 00:10:55.328 "data_offset": 0, 00:10:55.328 "data_size": 65536 00:10:55.328 }, 00:10:55.328 { 00:10:55.328 "name": "BaseBdev3", 
00:10:55.328 "uuid": "8a33aecf-38de-4344-9aec-8b25e0defd36", 00:10:55.328 "is_configured": true, 00:10:55.328 "data_offset": 0, 00:10:55.328 "data_size": 65536 00:10:55.328 }, 00:10:55.328 { 00:10:55.328 "name": "BaseBdev4", 00:10:55.328 "uuid": "7f968e89-3dca-48af-8cc5-8e80b1768ec7", 00:10:55.328 "is_configured": true, 00:10:55.328 "data_offset": 0, 00:10:55.328 "data_size": 65536 00:10:55.328 } 00:10:55.328 ] 00:10:55.328 }' 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.328 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.588 [2024-12-09 14:43:33.680275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.588 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.848 
14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.848 "name": "Existed_Raid", 00:10:55.848 "aliases": [ 00:10:55.848 "fb1ca53a-6932-41a5-88d6-c191d41871e5" 00:10:55.848 ], 00:10:55.848 "product_name": "Raid Volume", 00:10:55.848 "block_size": 512, 00:10:55.848 "num_blocks": 262144, 00:10:55.848 "uuid": "fb1ca53a-6932-41a5-88d6-c191d41871e5", 00:10:55.848 "assigned_rate_limits": { 00:10:55.848 "rw_ios_per_sec": 0, 00:10:55.848 "rw_mbytes_per_sec": 0, 00:10:55.848 "r_mbytes_per_sec": 0, 00:10:55.848 "w_mbytes_per_sec": 0 00:10:55.848 }, 00:10:55.848 "claimed": false, 00:10:55.848 "zoned": false, 00:10:55.848 "supported_io_types": { 00:10:55.848 "read": true, 00:10:55.848 "write": true, 00:10:55.848 "unmap": true, 00:10:55.848 "flush": true, 00:10:55.848 "reset": true, 00:10:55.848 "nvme_admin": false, 00:10:55.848 "nvme_io": false, 00:10:55.848 "nvme_io_md": false, 00:10:55.848 "write_zeroes": true, 00:10:55.848 "zcopy": false, 00:10:55.848 "get_zone_info": false, 00:10:55.848 "zone_management": false, 00:10:55.848 "zone_append": false, 00:10:55.848 "compare": false, 00:10:55.848 "compare_and_write": false, 00:10:55.848 "abort": false, 00:10:55.848 "seek_hole": false, 00:10:55.848 "seek_data": false, 00:10:55.848 "copy": false, 00:10:55.848 "nvme_iov_md": false 00:10:55.848 }, 00:10:55.848 "memory_domains": [ 00:10:55.848 { 00:10:55.848 "dma_device_id": "system", 00:10:55.848 "dma_device_type": 1 00:10:55.848 }, 00:10:55.848 { 00:10:55.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.848 "dma_device_type": 2 00:10:55.848 }, 00:10:55.848 { 00:10:55.848 "dma_device_id": "system", 00:10:55.848 "dma_device_type": 1 00:10:55.848 }, 00:10:55.848 { 00:10:55.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.848 "dma_device_type": 2 00:10:55.848 }, 00:10:55.848 { 00:10:55.848 "dma_device_id": "system", 00:10:55.848 "dma_device_type": 1 00:10:55.848 }, 00:10:55.848 { 00:10:55.848 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:55.848 "dma_device_type": 2 00:10:55.848 }, 00:10:55.848 { 00:10:55.848 "dma_device_id": "system", 00:10:55.848 "dma_device_type": 1 00:10:55.848 }, 00:10:55.848 { 00:10:55.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.848 "dma_device_type": 2 00:10:55.848 } 00:10:55.848 ], 00:10:55.848 "driver_specific": { 00:10:55.848 "raid": { 00:10:55.848 "uuid": "fb1ca53a-6932-41a5-88d6-c191d41871e5", 00:10:55.848 "strip_size_kb": 64, 00:10:55.848 "state": "online", 00:10:55.848 "raid_level": "concat", 00:10:55.848 "superblock": false, 00:10:55.848 "num_base_bdevs": 4, 00:10:55.848 "num_base_bdevs_discovered": 4, 00:10:55.848 "num_base_bdevs_operational": 4, 00:10:55.849 "base_bdevs_list": [ 00:10:55.849 { 00:10:55.849 "name": "BaseBdev1", 00:10:55.849 "uuid": "0475118f-ce56-4cf7-8db1-124015924ee3", 00:10:55.849 "is_configured": true, 00:10:55.849 "data_offset": 0, 00:10:55.849 "data_size": 65536 00:10:55.849 }, 00:10:55.849 { 00:10:55.849 "name": "BaseBdev2", 00:10:55.849 "uuid": "4edb1e4c-8ef2-455f-911a-e3a58ddf1c6f", 00:10:55.849 "is_configured": true, 00:10:55.849 "data_offset": 0, 00:10:55.849 "data_size": 65536 00:10:55.849 }, 00:10:55.849 { 00:10:55.849 "name": "BaseBdev3", 00:10:55.849 "uuid": "8a33aecf-38de-4344-9aec-8b25e0defd36", 00:10:55.849 "is_configured": true, 00:10:55.849 "data_offset": 0, 00:10:55.849 "data_size": 65536 00:10:55.849 }, 00:10:55.849 { 00:10:55.849 "name": "BaseBdev4", 00:10:55.849 "uuid": "7f968e89-3dca-48af-8cc5-8e80b1768ec7", 00:10:55.849 "is_configured": true, 00:10:55.849 "data_offset": 0, 00:10:55.849 "data_size": 65536 00:10:55.849 } 00:10:55.849 ] 00:10:55.849 } 00:10:55.849 } 00:10:55.849 }' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:55.849 BaseBdev2 
00:10:55.849 BaseBdev3 00:10:55.849 BaseBdev4' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.849 14:43:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.849 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.109 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:56.109 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.109 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.109 14:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.109 14:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.109 14:43:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.109 [2024-12-09 14:43:34.023416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.109 [2024-12-09 14:43:34.023480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.109 [2024-12-09 14:43:34.023555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.109 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.110 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.110 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.110 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.110 "name": "Existed_Raid", 00:10:56.110 "uuid": "fb1ca53a-6932-41a5-88d6-c191d41871e5", 00:10:56.110 "strip_size_kb": 64, 00:10:56.110 "state": "offline", 00:10:56.110 "raid_level": "concat", 00:10:56.110 "superblock": false, 00:10:56.110 "num_base_bdevs": 4, 00:10:56.110 "num_base_bdevs_discovered": 3, 00:10:56.110 "num_base_bdevs_operational": 3, 00:10:56.110 "base_bdevs_list": [ 00:10:56.110 { 00:10:56.110 "name": null, 00:10:56.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.110 "is_configured": false, 00:10:56.110 "data_offset": 0, 00:10:56.110 "data_size": 65536 00:10:56.110 }, 00:10:56.110 { 00:10:56.110 "name": "BaseBdev2", 00:10:56.110 "uuid": "4edb1e4c-8ef2-455f-911a-e3a58ddf1c6f", 00:10:56.110 "is_configured": 
true, 00:10:56.110 "data_offset": 0, 00:10:56.110 "data_size": 65536 00:10:56.110 }, 00:10:56.110 { 00:10:56.110 "name": "BaseBdev3", 00:10:56.110 "uuid": "8a33aecf-38de-4344-9aec-8b25e0defd36", 00:10:56.110 "is_configured": true, 00:10:56.110 "data_offset": 0, 00:10:56.110 "data_size": 65536 00:10:56.110 }, 00:10:56.110 { 00:10:56.110 "name": "BaseBdev4", 00:10:56.110 "uuid": "7f968e89-3dca-48af-8cc5-8e80b1768ec7", 00:10:56.110 "is_configured": true, 00:10:56.110 "data_offset": 0, 00:10:56.110 "data_size": 65536 00:10:56.110 } 00:10:56.110 ] 00:10:56.110 }' 00:10:56.110 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.110 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.680 [2024-12-09 14:43:34.665123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.680 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 [2024-12-09 14:43:34.832936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.939 14:43:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.939 14:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 [2024-12-09 14:43:34.998381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:56.939 [2024-12-09 14:43:34.998473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 BaseBdev2 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 [ 00:10:57.197 { 00:10:57.197 "name": "BaseBdev2", 00:10:57.197 "aliases": [ 00:10:57.197 "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce" 00:10:57.197 ], 00:10:57.197 "product_name": "Malloc disk", 00:10:57.197 "block_size": 512, 00:10:57.197 "num_blocks": 65536, 00:10:57.197 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:10:57.197 "assigned_rate_limits": { 00:10:57.197 "rw_ios_per_sec": 0, 00:10:57.197 "rw_mbytes_per_sec": 0, 00:10:57.197 "r_mbytes_per_sec": 0, 00:10:57.197 "w_mbytes_per_sec": 0 00:10:57.197 }, 00:10:57.197 "claimed": false, 00:10:57.197 "zoned": false, 00:10:57.197 "supported_io_types": { 00:10:57.197 "read": true, 00:10:57.197 "write": true, 00:10:57.197 "unmap": true, 00:10:57.197 "flush": true, 00:10:57.197 "reset": true, 00:10:57.197 "nvme_admin": false, 00:10:57.197 "nvme_io": false, 00:10:57.197 "nvme_io_md": false, 00:10:57.197 "write_zeroes": true, 00:10:57.197 "zcopy": true, 00:10:57.197 "get_zone_info": false, 00:10:57.197 "zone_management": false, 00:10:57.197 "zone_append": false, 00:10:57.197 "compare": false, 00:10:57.197 "compare_and_write": false, 00:10:57.197 "abort": true, 00:10:57.197 "seek_hole": false, 00:10:57.197 
"seek_data": false, 00:10:57.197 "copy": true, 00:10:57.197 "nvme_iov_md": false 00:10:57.197 }, 00:10:57.197 "memory_domains": [ 00:10:57.197 { 00:10:57.197 "dma_device_id": "system", 00:10:57.197 "dma_device_type": 1 00:10:57.197 }, 00:10:57.197 { 00:10:57.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.197 "dma_device_type": 2 00:10:57.197 } 00:10:57.197 ], 00:10:57.197 "driver_specific": {} 00:10:57.197 } 00:10:57.197 ] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 BaseBdev3 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.456 [ 00:10:57.456 { 00:10:57.456 "name": "BaseBdev3", 00:10:57.457 "aliases": [ 00:10:57.457 "bf46b515-fd91-4ea5-a3b9-77962a848851" 00:10:57.457 ], 00:10:57.457 "product_name": "Malloc disk", 00:10:57.457 "block_size": 512, 00:10:57.457 "num_blocks": 65536, 00:10:57.457 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:10:57.457 "assigned_rate_limits": { 00:10:57.457 "rw_ios_per_sec": 0, 00:10:57.457 "rw_mbytes_per_sec": 0, 00:10:57.457 "r_mbytes_per_sec": 0, 00:10:57.457 "w_mbytes_per_sec": 0 00:10:57.457 }, 00:10:57.457 "claimed": false, 00:10:57.457 "zoned": false, 00:10:57.457 "supported_io_types": { 00:10:57.457 "read": true, 00:10:57.457 "write": true, 00:10:57.457 "unmap": true, 00:10:57.457 "flush": true, 00:10:57.457 "reset": true, 00:10:57.457 "nvme_admin": false, 00:10:57.457 "nvme_io": false, 00:10:57.457 "nvme_io_md": false, 00:10:57.457 "write_zeroes": true, 00:10:57.457 "zcopy": true, 00:10:57.457 "get_zone_info": false, 00:10:57.457 "zone_management": false, 00:10:57.457 "zone_append": false, 00:10:57.457 "compare": false, 00:10:57.457 "compare_and_write": false, 00:10:57.457 "abort": true, 00:10:57.457 "seek_hole": false, 00:10:57.457 "seek_data": false, 
00:10:57.457 "copy": true, 00:10:57.457 "nvme_iov_md": false 00:10:57.457 }, 00:10:57.457 "memory_domains": [ 00:10:57.457 { 00:10:57.457 "dma_device_id": "system", 00:10:57.457 "dma_device_type": 1 00:10:57.457 }, 00:10:57.457 { 00:10:57.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.457 "dma_device_type": 2 00:10:57.457 } 00:10:57.457 ], 00:10:57.457 "driver_specific": {} 00:10:57.457 } 00:10:57.457 ] 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 BaseBdev4 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.457 
14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 [ 00:10:57.457 { 00:10:57.457 "name": "BaseBdev4", 00:10:57.457 "aliases": [ 00:10:57.457 "b0dcac3d-0506-48e0-b8a1-1f311930125e" 00:10:57.457 ], 00:10:57.457 "product_name": "Malloc disk", 00:10:57.457 "block_size": 512, 00:10:57.457 "num_blocks": 65536, 00:10:57.457 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:10:57.457 "assigned_rate_limits": { 00:10:57.457 "rw_ios_per_sec": 0, 00:10:57.457 "rw_mbytes_per_sec": 0, 00:10:57.457 "r_mbytes_per_sec": 0, 00:10:57.457 "w_mbytes_per_sec": 0 00:10:57.457 }, 00:10:57.457 "claimed": false, 00:10:57.457 "zoned": false, 00:10:57.457 "supported_io_types": { 00:10:57.457 "read": true, 00:10:57.457 "write": true, 00:10:57.457 "unmap": true, 00:10:57.457 "flush": true, 00:10:57.457 "reset": true, 00:10:57.457 "nvme_admin": false, 00:10:57.457 "nvme_io": false, 00:10:57.457 "nvme_io_md": false, 00:10:57.457 "write_zeroes": true, 00:10:57.457 "zcopy": true, 00:10:57.457 "get_zone_info": false, 00:10:57.457 "zone_management": false, 00:10:57.457 "zone_append": false, 00:10:57.457 "compare": false, 00:10:57.457 "compare_and_write": false, 00:10:57.457 "abort": true, 00:10:57.457 "seek_hole": false, 00:10:57.457 "seek_data": false, 00:10:57.457 
"copy": true, 00:10:57.457 "nvme_iov_md": false 00:10:57.457 }, 00:10:57.457 "memory_domains": [ 00:10:57.457 { 00:10:57.457 "dma_device_id": "system", 00:10:57.457 "dma_device_type": 1 00:10:57.457 }, 00:10:57.457 { 00:10:57.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.457 "dma_device_type": 2 00:10:57.457 } 00:10:57.457 ], 00:10:57.457 "driver_specific": {} 00:10:57.457 } 00:10:57.457 ] 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 [2024-12-09 14:43:35.429522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.457 [2024-12-09 14:43:35.429701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.457 [2024-12-09 14:43:35.429764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.457 [2024-12-09 14:43:35.432208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.457 [2024-12-09 14:43:35.432331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 14:43:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.457 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.458 "name": "Existed_Raid", 00:10:57.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.458 "strip_size_kb": 64, 00:10:57.458 "state": "configuring", 00:10:57.458 
"raid_level": "concat", 00:10:57.458 "superblock": false, 00:10:57.458 "num_base_bdevs": 4, 00:10:57.458 "num_base_bdevs_discovered": 3, 00:10:57.458 "num_base_bdevs_operational": 4, 00:10:57.458 "base_bdevs_list": [ 00:10:57.458 { 00:10:57.458 "name": "BaseBdev1", 00:10:57.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.458 "is_configured": false, 00:10:57.458 "data_offset": 0, 00:10:57.458 "data_size": 0 00:10:57.458 }, 00:10:57.458 { 00:10:57.458 "name": "BaseBdev2", 00:10:57.458 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:10:57.458 "is_configured": true, 00:10:57.458 "data_offset": 0, 00:10:57.458 "data_size": 65536 00:10:57.458 }, 00:10:57.458 { 00:10:57.458 "name": "BaseBdev3", 00:10:57.458 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:10:57.458 "is_configured": true, 00:10:57.458 "data_offset": 0, 00:10:57.458 "data_size": 65536 00:10:57.458 }, 00:10:57.458 { 00:10:57.458 "name": "BaseBdev4", 00:10:57.458 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:10:57.458 "is_configured": true, 00:10:57.458 "data_offset": 0, 00:10:57.458 "data_size": 65536 00:10:57.458 } 00:10:57.458 ] 00:10:57.458 }' 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.458 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.717 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:57.717 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.717 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.717 [2024-12-09 14:43:35.804963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.717 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.718 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.978 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.978 "name": "Existed_Raid", 00:10:57.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.978 "strip_size_kb": 64, 00:10:57.978 "state": "configuring", 00:10:57.978 "raid_level": "concat", 00:10:57.978 "superblock": false, 
00:10:57.978 "num_base_bdevs": 4, 00:10:57.978 "num_base_bdevs_discovered": 2, 00:10:57.978 "num_base_bdevs_operational": 4, 00:10:57.978 "base_bdevs_list": [ 00:10:57.978 { 00:10:57.978 "name": "BaseBdev1", 00:10:57.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.978 "is_configured": false, 00:10:57.978 "data_offset": 0, 00:10:57.978 "data_size": 0 00:10:57.978 }, 00:10:57.978 { 00:10:57.978 "name": null, 00:10:57.978 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:10:57.978 "is_configured": false, 00:10:57.978 "data_offset": 0, 00:10:57.978 "data_size": 65536 00:10:57.978 }, 00:10:57.978 { 00:10:57.978 "name": "BaseBdev3", 00:10:57.978 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:10:57.978 "is_configured": true, 00:10:57.978 "data_offset": 0, 00:10:57.978 "data_size": 65536 00:10:57.978 }, 00:10:57.978 { 00:10:57.978 "name": "BaseBdev4", 00:10:57.978 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:10:57.978 "is_configured": true, 00:10:57.978 "data_offset": 0, 00:10:57.978 "data_size": 65536 00:10:57.978 } 00:10:57.978 ] 00:10:57.978 }' 00:10:57.978 14:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.978 14:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.237 14:43:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.237 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.497 [2024-12-09 14:43:36.368791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.497 BaseBdev1 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.498 [ 00:10:58.498 { 00:10:58.498 "name": "BaseBdev1", 00:10:58.498 "aliases": [ 00:10:58.498 "90864f30-9b65-4062-98bf-68306a057393" 00:10:58.498 ], 00:10:58.498 "product_name": "Malloc disk", 00:10:58.498 "block_size": 512, 00:10:58.498 "num_blocks": 65536, 00:10:58.498 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:10:58.498 "assigned_rate_limits": { 00:10:58.498 "rw_ios_per_sec": 0, 00:10:58.498 "rw_mbytes_per_sec": 0, 00:10:58.498 "r_mbytes_per_sec": 0, 00:10:58.498 "w_mbytes_per_sec": 0 00:10:58.498 }, 00:10:58.498 "claimed": true, 00:10:58.498 "claim_type": "exclusive_write", 00:10:58.498 "zoned": false, 00:10:58.498 "supported_io_types": { 00:10:58.498 "read": true, 00:10:58.498 "write": true, 00:10:58.498 "unmap": true, 00:10:58.498 "flush": true, 00:10:58.498 "reset": true, 00:10:58.498 "nvme_admin": false, 00:10:58.498 "nvme_io": false, 00:10:58.498 "nvme_io_md": false, 00:10:58.498 "write_zeroes": true, 00:10:58.498 "zcopy": true, 00:10:58.498 "get_zone_info": false, 00:10:58.498 "zone_management": false, 00:10:58.498 "zone_append": false, 00:10:58.498 "compare": false, 00:10:58.498 "compare_and_write": false, 00:10:58.498 "abort": true, 00:10:58.498 "seek_hole": false, 00:10:58.498 "seek_data": false, 00:10:58.498 "copy": true, 00:10:58.498 "nvme_iov_md": false 00:10:58.498 }, 00:10:58.498 "memory_domains": [ 00:10:58.498 { 00:10:58.498 "dma_device_id": "system", 00:10:58.498 "dma_device_type": 1 00:10:58.498 }, 00:10:58.498 { 00:10:58.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.498 "dma_device_type": 2 00:10:58.498 } 00:10:58.498 ], 00:10:58.498 "driver_specific": {} 00:10:58.498 } 00:10:58.498 ] 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.498 "name": "Existed_Raid", 00:10:58.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.498 "strip_size_kb": 64, 00:10:58.498 "state": "configuring", 00:10:58.498 "raid_level": "concat", 00:10:58.498 "superblock": false, 
00:10:58.498 "num_base_bdevs": 4, 00:10:58.498 "num_base_bdevs_discovered": 3, 00:10:58.498 "num_base_bdevs_operational": 4, 00:10:58.498 "base_bdevs_list": [ 00:10:58.498 { 00:10:58.498 "name": "BaseBdev1", 00:10:58.498 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:10:58.498 "is_configured": true, 00:10:58.498 "data_offset": 0, 00:10:58.498 "data_size": 65536 00:10:58.498 }, 00:10:58.498 { 00:10:58.498 "name": null, 00:10:58.498 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:10:58.498 "is_configured": false, 00:10:58.498 "data_offset": 0, 00:10:58.498 "data_size": 65536 00:10:58.498 }, 00:10:58.498 { 00:10:58.498 "name": "BaseBdev3", 00:10:58.498 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:10:58.498 "is_configured": true, 00:10:58.498 "data_offset": 0, 00:10:58.498 "data_size": 65536 00:10:58.498 }, 00:10:58.498 { 00:10:58.498 "name": "BaseBdev4", 00:10:58.498 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:10:58.498 "is_configured": true, 00:10:58.498 "data_offset": 0, 00:10:58.498 "data_size": 65536 00:10:58.498 } 00:10:58.498 ] 00:10:58.498 }' 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.498 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:59.068 14:43:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.068 [2024-12-09 14:43:36.952006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.068 14:43:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.068 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.068 "name": "Existed_Raid", 00:10:59.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.068 "strip_size_kb": 64, 00:10:59.068 "state": "configuring", 00:10:59.068 "raid_level": "concat", 00:10:59.068 "superblock": false, 00:10:59.069 "num_base_bdevs": 4, 00:10:59.069 "num_base_bdevs_discovered": 2, 00:10:59.069 "num_base_bdevs_operational": 4, 00:10:59.069 "base_bdevs_list": [ 00:10:59.069 { 00:10:59.069 "name": "BaseBdev1", 00:10:59.069 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:10:59.069 "is_configured": true, 00:10:59.069 "data_offset": 0, 00:10:59.069 "data_size": 65536 00:10:59.069 }, 00:10:59.069 { 00:10:59.069 "name": null, 00:10:59.069 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:10:59.069 "is_configured": false, 00:10:59.069 "data_offset": 0, 00:10:59.069 "data_size": 65536 00:10:59.069 }, 00:10:59.069 { 00:10:59.069 "name": null, 00:10:59.069 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:10:59.069 "is_configured": false, 00:10:59.069 "data_offset": 0, 00:10:59.069 "data_size": 65536 00:10:59.069 }, 00:10:59.069 { 00:10:59.069 "name": "BaseBdev4", 00:10:59.069 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:10:59.069 "is_configured": true, 00:10:59.069 "data_offset": 0, 00:10:59.069 "data_size": 65536 00:10:59.069 } 00:10:59.069 ] 00:10:59.069 }' 00:10:59.069 14:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.069 14:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.329 [2024-12-09 14:43:37.439181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.329 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.589 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.589 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.589 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.589 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.589 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.589 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.589 "name": "Existed_Raid", 00:10:59.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.590 "strip_size_kb": 64, 00:10:59.590 "state": "configuring", 00:10:59.590 "raid_level": "concat", 00:10:59.590 "superblock": false, 00:10:59.590 "num_base_bdevs": 4, 00:10:59.590 "num_base_bdevs_discovered": 3, 00:10:59.590 "num_base_bdevs_operational": 4, 00:10:59.590 "base_bdevs_list": [ 00:10:59.590 { 00:10:59.590 "name": "BaseBdev1", 00:10:59.590 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:10:59.590 "is_configured": true, 00:10:59.590 "data_offset": 0, 00:10:59.590 "data_size": 65536 00:10:59.590 }, 00:10:59.590 { 00:10:59.590 "name": null, 00:10:59.590 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:10:59.590 "is_configured": false, 00:10:59.590 "data_offset": 0, 00:10:59.590 "data_size": 65536 00:10:59.590 }, 00:10:59.590 { 00:10:59.590 "name": "BaseBdev3", 00:10:59.590 "uuid": 
"bf46b515-fd91-4ea5-a3b9-77962a848851", 00:10:59.590 "is_configured": true, 00:10:59.590 "data_offset": 0, 00:10:59.590 "data_size": 65536 00:10:59.590 }, 00:10:59.590 { 00:10:59.590 "name": "BaseBdev4", 00:10:59.590 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:10:59.590 "is_configured": true, 00:10:59.590 "data_offset": 0, 00:10:59.590 "data_size": 65536 00:10:59.590 } 00:10:59.590 ] 00:10:59.590 }' 00:10:59.590 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.590 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.850 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.850 [2024-12-09 14:43:37.890520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.109 14:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.109 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.109 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.109 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.109 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.109 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.109 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.109 "name": "Existed_Raid", 00:11:00.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.110 "strip_size_kb": 64, 00:11:00.110 "state": "configuring", 00:11:00.110 "raid_level": "concat", 00:11:00.110 "superblock": false, 00:11:00.110 "num_base_bdevs": 4, 00:11:00.110 
"num_base_bdevs_discovered": 2, 00:11:00.110 "num_base_bdevs_operational": 4, 00:11:00.110 "base_bdevs_list": [ 00:11:00.110 { 00:11:00.110 "name": null, 00:11:00.110 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:11:00.110 "is_configured": false, 00:11:00.110 "data_offset": 0, 00:11:00.110 "data_size": 65536 00:11:00.110 }, 00:11:00.110 { 00:11:00.110 "name": null, 00:11:00.110 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:11:00.110 "is_configured": false, 00:11:00.110 "data_offset": 0, 00:11:00.110 "data_size": 65536 00:11:00.110 }, 00:11:00.110 { 00:11:00.110 "name": "BaseBdev3", 00:11:00.110 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:11:00.110 "is_configured": true, 00:11:00.110 "data_offset": 0, 00:11:00.110 "data_size": 65536 00:11:00.110 }, 00:11:00.110 { 00:11:00.110 "name": "BaseBdev4", 00:11:00.110 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:11:00.110 "is_configured": true, 00:11:00.110 "data_offset": 0, 00:11:00.110 "data_size": 65536 00:11:00.110 } 00:11:00.110 ] 00:11:00.110 }' 00:11:00.110 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.110 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.429 [2024-12-09 14:43:38.408245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.429 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.430 "name": "Existed_Raid", 00:11:00.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.430 "strip_size_kb": 64, 00:11:00.430 "state": "configuring", 00:11:00.430 "raid_level": "concat", 00:11:00.430 "superblock": false, 00:11:00.430 "num_base_bdevs": 4, 00:11:00.430 "num_base_bdevs_discovered": 3, 00:11:00.430 "num_base_bdevs_operational": 4, 00:11:00.430 "base_bdevs_list": [ 00:11:00.430 { 00:11:00.430 "name": null, 00:11:00.430 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:11:00.430 "is_configured": false, 00:11:00.430 "data_offset": 0, 00:11:00.430 "data_size": 65536 00:11:00.430 }, 00:11:00.430 { 00:11:00.430 "name": "BaseBdev2", 00:11:00.430 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:11:00.430 "is_configured": true, 00:11:00.430 "data_offset": 0, 00:11:00.430 "data_size": 65536 00:11:00.430 }, 00:11:00.430 { 00:11:00.430 "name": "BaseBdev3", 00:11:00.430 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:11:00.430 "is_configured": true, 00:11:00.430 "data_offset": 0, 00:11:00.430 "data_size": 65536 00:11:00.430 }, 00:11:00.430 { 00:11:00.430 "name": "BaseBdev4", 00:11:00.430 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:11:00.430 "is_configured": true, 00:11:00.430 "data_offset": 0, 00:11:00.430 "data_size": 65536 00:11:00.430 } 00:11:00.430 ] 00:11:00.430 }' 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.430 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90864f30-9b65-4062-98bf-68306a057393 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.998 14:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.998 [2024-12-09 14:43:39.014671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.998 [2024-12-09 14:43:39.014844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.998 [2024-12-09 14:43:39.014894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:00.998 [2024-12-09 14:43:39.015280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:11:00.998 [2024-12-09 14:43:39.015499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.998 [2024-12-09 14:43:39.015550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:00.998 NewBaseBdev 00:11:00.998 [2024-12-09 14:43:39.015937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.998 14:43:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.998 [ 00:11:00.998 { 00:11:00.998 "name": "NewBaseBdev", 00:11:00.998 "aliases": [ 00:11:00.998 "90864f30-9b65-4062-98bf-68306a057393" 00:11:00.998 ], 00:11:00.998 "product_name": "Malloc disk", 00:11:00.998 "block_size": 512, 00:11:00.998 "num_blocks": 65536, 00:11:00.998 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:11:00.998 "assigned_rate_limits": { 00:11:00.998 "rw_ios_per_sec": 0, 00:11:00.998 "rw_mbytes_per_sec": 0, 00:11:00.998 "r_mbytes_per_sec": 0, 00:11:00.998 "w_mbytes_per_sec": 0 00:11:00.998 }, 00:11:00.998 "claimed": true, 00:11:00.998 "claim_type": "exclusive_write", 00:11:00.998 "zoned": false, 00:11:00.998 "supported_io_types": { 00:11:00.998 "read": true, 00:11:00.998 "write": true, 00:11:00.998 "unmap": true, 00:11:00.998 "flush": true, 00:11:00.998 "reset": true, 00:11:00.998 "nvme_admin": false, 00:11:00.998 "nvme_io": false, 00:11:00.998 "nvme_io_md": false, 00:11:00.998 "write_zeroes": true, 00:11:00.998 "zcopy": true, 00:11:00.998 "get_zone_info": false, 00:11:00.999 "zone_management": false, 00:11:00.999 "zone_append": false, 00:11:00.999 "compare": false, 00:11:00.999 "compare_and_write": false, 00:11:00.999 "abort": true, 00:11:00.999 "seek_hole": false, 00:11:00.999 "seek_data": false, 00:11:00.999 "copy": true, 00:11:00.999 "nvme_iov_md": false 00:11:00.999 }, 00:11:00.999 "memory_domains": [ 00:11:00.999 { 00:11:00.999 "dma_device_id": "system", 00:11:00.999 "dma_device_type": 1 00:11:00.999 }, 00:11:00.999 { 00:11:00.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.999 "dma_device_type": 2 00:11:00.999 } 00:11:00.999 ], 00:11:00.999 "driver_specific": {} 00:11:00.999 } 00:11:00.999 ] 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.999 "name": "Existed_Raid", 00:11:00.999 "uuid": "2426634f-8c3f-47a1-a839-49d127d38584", 00:11:00.999 "strip_size_kb": 64, 00:11:00.999 "state": "online", 00:11:00.999 "raid_level": "concat", 00:11:00.999 "superblock": false, 00:11:00.999 
"num_base_bdevs": 4, 00:11:00.999 "num_base_bdevs_discovered": 4, 00:11:00.999 "num_base_bdevs_operational": 4, 00:11:00.999 "base_bdevs_list": [ 00:11:00.999 { 00:11:00.999 "name": "NewBaseBdev", 00:11:00.999 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:11:00.999 "is_configured": true, 00:11:00.999 "data_offset": 0, 00:11:00.999 "data_size": 65536 00:11:00.999 }, 00:11:00.999 { 00:11:00.999 "name": "BaseBdev2", 00:11:00.999 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:11:00.999 "is_configured": true, 00:11:00.999 "data_offset": 0, 00:11:00.999 "data_size": 65536 00:11:00.999 }, 00:11:00.999 { 00:11:00.999 "name": "BaseBdev3", 00:11:00.999 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:11:00.999 "is_configured": true, 00:11:00.999 "data_offset": 0, 00:11:00.999 "data_size": 65536 00:11:00.999 }, 00:11:00.999 { 00:11:00.999 "name": "BaseBdev4", 00:11:00.999 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:11:00.999 "is_configured": true, 00:11:00.999 "data_offset": 0, 00:11:00.999 "data_size": 65536 00:11:00.999 } 00:11:00.999 ] 00:11:00.999 }' 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.999 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.568 14:43:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.568 [2024-12-09 14:43:39.438577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.568 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.568 "name": "Existed_Raid", 00:11:01.568 "aliases": [ 00:11:01.568 "2426634f-8c3f-47a1-a839-49d127d38584" 00:11:01.568 ], 00:11:01.568 "product_name": "Raid Volume", 00:11:01.568 "block_size": 512, 00:11:01.568 "num_blocks": 262144, 00:11:01.568 "uuid": "2426634f-8c3f-47a1-a839-49d127d38584", 00:11:01.568 "assigned_rate_limits": { 00:11:01.569 "rw_ios_per_sec": 0, 00:11:01.569 "rw_mbytes_per_sec": 0, 00:11:01.569 "r_mbytes_per_sec": 0, 00:11:01.569 "w_mbytes_per_sec": 0 00:11:01.569 }, 00:11:01.569 "claimed": false, 00:11:01.569 "zoned": false, 00:11:01.569 "supported_io_types": { 00:11:01.569 "read": true, 00:11:01.569 "write": true, 00:11:01.569 "unmap": true, 00:11:01.569 "flush": true, 00:11:01.569 "reset": true, 00:11:01.569 "nvme_admin": false, 00:11:01.569 "nvme_io": false, 00:11:01.569 "nvme_io_md": false, 00:11:01.569 "write_zeroes": true, 00:11:01.569 "zcopy": false, 00:11:01.569 "get_zone_info": false, 00:11:01.569 "zone_management": false, 00:11:01.569 "zone_append": false, 00:11:01.569 "compare": false, 00:11:01.569 "compare_and_write": false, 00:11:01.569 "abort": false, 00:11:01.569 "seek_hole": false, 00:11:01.569 "seek_data": false, 00:11:01.569 "copy": false, 00:11:01.569 "nvme_iov_md": false 00:11:01.569 }, 
00:11:01.569 "memory_domains": [ 00:11:01.569 { 00:11:01.569 "dma_device_id": "system", 00:11:01.569 "dma_device_type": 1 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.569 "dma_device_type": 2 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "dma_device_id": "system", 00:11:01.569 "dma_device_type": 1 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.569 "dma_device_type": 2 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "dma_device_id": "system", 00:11:01.569 "dma_device_type": 1 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.569 "dma_device_type": 2 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "dma_device_id": "system", 00:11:01.569 "dma_device_type": 1 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.569 "dma_device_type": 2 00:11:01.569 } 00:11:01.569 ], 00:11:01.569 "driver_specific": { 00:11:01.569 "raid": { 00:11:01.569 "uuid": "2426634f-8c3f-47a1-a839-49d127d38584", 00:11:01.569 "strip_size_kb": 64, 00:11:01.569 "state": "online", 00:11:01.569 "raid_level": "concat", 00:11:01.569 "superblock": false, 00:11:01.569 "num_base_bdevs": 4, 00:11:01.569 "num_base_bdevs_discovered": 4, 00:11:01.569 "num_base_bdevs_operational": 4, 00:11:01.569 "base_bdevs_list": [ 00:11:01.569 { 00:11:01.569 "name": "NewBaseBdev", 00:11:01.569 "uuid": "90864f30-9b65-4062-98bf-68306a057393", 00:11:01.569 "is_configured": true, 00:11:01.569 "data_offset": 0, 00:11:01.569 "data_size": 65536 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "name": "BaseBdev2", 00:11:01.569 "uuid": "d34f0180-37bb-4a40-aba5-2e2adbb1c5ce", 00:11:01.569 "is_configured": true, 00:11:01.569 "data_offset": 0, 00:11:01.569 "data_size": 65536 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "name": "BaseBdev3", 00:11:01.569 "uuid": "bf46b515-fd91-4ea5-a3b9-77962a848851", 00:11:01.569 "is_configured": true, 00:11:01.569 "data_offset": 0, 
00:11:01.569 "data_size": 65536 00:11:01.569 }, 00:11:01.569 { 00:11:01.569 "name": "BaseBdev4", 00:11:01.569 "uuid": "b0dcac3d-0506-48e0-b8a1-1f311930125e", 00:11:01.569 "is_configured": true, 00:11:01.569 "data_offset": 0, 00:11:01.569 "data_size": 65536 00:11:01.569 } 00:11:01.569 ] 00:11:01.569 } 00:11:01.569 } 00:11:01.569 }' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:01.569 BaseBdev2 00:11:01.569 BaseBdev3 00:11:01.569 BaseBdev4' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.569 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.829 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 [2024-12-09 14:43:39.777628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.830 [2024-12-09 14:43:39.777719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.830 [2024-12-09 14:43:39.777842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.830 [2024-12-09 14:43:39.777933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.830 [2024-12-09 14:43:39.777945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72573 00:11:01.830 14:43:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72573 ']' 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72573 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72573 00:11:01.830 killing process with pid 72573 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72573' 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72573 00:11:01.830 [2024-12-09 14:43:39.827824] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.830 14:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72573 00:11:02.399 [2024-12-09 14:43:40.274663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.780 ************************************ 00:11:03.780 END TEST raid_state_function_test 00:11:03.780 ************************************ 00:11:03.780 00:11:03.780 real 0m12.037s 00:11:03.780 user 0m18.730s 00:11:03.780 sys 0m2.283s 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.780 14:43:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:03.780 14:43:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.780 14:43:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.780 14:43:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.780 ************************************ 00:11:03.780 START TEST raid_state_function_test_sb 00:11:03.780 ************************************ 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:03.780 Process raid pid: 73248 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=73248 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73248' 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73248 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73248 ']' 00:11:03.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.780 14:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.780 [2024-12-09 14:43:41.738659] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:11:03.780 [2024-12-09 14:43:41.738789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.040 [2024-12-09 14:43:41.927273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.040 [2024-12-09 14:43:42.076340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.300 [2024-12-09 14:43:42.329715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.300 [2024-12-09 14:43:42.329784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.559 [2024-12-09 14:43:42.602795] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.559 [2024-12-09 14:43:42.602921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.559 [2024-12-09 14:43:42.602942] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.559 [2024-12-09 14:43:42.602958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.559 [2024-12-09 14:43:42.602967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:04.559 [2024-12-09 14:43:42.602979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.559 [2024-12-09 14:43:42.602988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.559 [2024-12-09 14:43:42.603000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.559 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.560 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.560 14:43:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.560 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.560 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.560 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.560 "name": "Existed_Raid", 00:11:04.560 "uuid": "78f7838a-8a59-4376-b2e4-2348aac55a9c", 00:11:04.560 "strip_size_kb": 64, 00:11:04.560 "state": "configuring", 00:11:04.560 "raid_level": "concat", 00:11:04.560 "superblock": true, 00:11:04.560 "num_base_bdevs": 4, 00:11:04.560 "num_base_bdevs_discovered": 0, 00:11:04.560 "num_base_bdevs_operational": 4, 00:11:04.560 "base_bdevs_list": [ 00:11:04.560 { 00:11:04.560 "name": "BaseBdev1", 00:11:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.560 "is_configured": false, 00:11:04.560 "data_offset": 0, 00:11:04.560 "data_size": 0 00:11:04.560 }, 00:11:04.560 { 00:11:04.560 "name": "BaseBdev2", 00:11:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.560 "is_configured": false, 00:11:04.560 "data_offset": 0, 00:11:04.560 "data_size": 0 00:11:04.560 }, 00:11:04.560 { 00:11:04.560 "name": "BaseBdev3", 00:11:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.560 "is_configured": false, 00:11:04.560 "data_offset": 0, 00:11:04.560 "data_size": 0 00:11:04.560 }, 00:11:04.560 { 00:11:04.560 "name": "BaseBdev4", 00:11:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.560 "is_configured": false, 00:11:04.560 "data_offset": 0, 00:11:04.560 "data_size": 0 00:11:04.560 } 00:11:04.560 ] 00:11:04.560 }' 00:11:04.560 14:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.560 14:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.129 14:43:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.129 [2024-12-09 14:43:43.061956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.129 [2024-12-09 14:43:43.062134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.129 [2024-12-09 14:43:43.073910] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.129 [2024-12-09 14:43:43.073979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.129 [2024-12-09 14:43:43.073991] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.129 [2024-12-09 14:43:43.074005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.129 [2024-12-09 14:43:43.074013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.129 [2024-12-09 14:43:43.074026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.129 [2024-12-09 14:43:43.074034] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:05.129 [2024-12-09 14:43:43.074046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.129 [2024-12-09 14:43:43.135595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.129 BaseBdev1 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.129 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.129 [ 00:11:05.129 { 00:11:05.129 "name": "BaseBdev1", 00:11:05.129 "aliases": [ 00:11:05.129 "ef057958-ade1-4d87-a86c-4d833b9ae258" 00:11:05.129 ], 00:11:05.129 "product_name": "Malloc disk", 00:11:05.129 "block_size": 512, 00:11:05.129 "num_blocks": 65536, 00:11:05.129 "uuid": "ef057958-ade1-4d87-a86c-4d833b9ae258", 00:11:05.129 "assigned_rate_limits": { 00:11:05.129 "rw_ios_per_sec": 0, 00:11:05.129 "rw_mbytes_per_sec": 0, 00:11:05.129 "r_mbytes_per_sec": 0, 00:11:05.129 "w_mbytes_per_sec": 0 00:11:05.129 }, 00:11:05.129 "claimed": true, 00:11:05.129 "claim_type": "exclusive_write", 00:11:05.129 "zoned": false, 00:11:05.129 "supported_io_types": { 00:11:05.129 "read": true, 00:11:05.129 "write": true, 00:11:05.129 "unmap": true, 00:11:05.129 "flush": true, 00:11:05.129 "reset": true, 00:11:05.129 "nvme_admin": false, 00:11:05.129 "nvme_io": false, 00:11:05.129 "nvme_io_md": false, 00:11:05.129 "write_zeroes": true, 00:11:05.129 "zcopy": true, 00:11:05.129 "get_zone_info": false, 00:11:05.129 "zone_management": false, 00:11:05.129 "zone_append": false, 00:11:05.129 "compare": false, 00:11:05.129 "compare_and_write": false, 00:11:05.129 "abort": true, 00:11:05.129 "seek_hole": false, 00:11:05.129 "seek_data": false, 00:11:05.129 "copy": true, 00:11:05.129 "nvme_iov_md": false 00:11:05.129 }, 00:11:05.130 "memory_domains": [ 00:11:05.130 { 00:11:05.130 "dma_device_id": "system", 00:11:05.130 "dma_device_type": 1 00:11:05.130 }, 00:11:05.130 { 00:11:05.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.130 "dma_device_type": 2 00:11:05.130 } 
00:11:05.130 ], 00:11:05.130 "driver_specific": {} 00:11:05.130 } 00:11:05.130 ] 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.130 14:43:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.130 "name": "Existed_Raid", 00:11:05.130 "uuid": "63a9f87f-74a7-4711-8c61-c198c701b819", 00:11:05.130 "strip_size_kb": 64, 00:11:05.130 "state": "configuring", 00:11:05.130 "raid_level": "concat", 00:11:05.130 "superblock": true, 00:11:05.130 "num_base_bdevs": 4, 00:11:05.130 "num_base_bdevs_discovered": 1, 00:11:05.130 "num_base_bdevs_operational": 4, 00:11:05.130 "base_bdevs_list": [ 00:11:05.130 { 00:11:05.130 "name": "BaseBdev1", 00:11:05.130 "uuid": "ef057958-ade1-4d87-a86c-4d833b9ae258", 00:11:05.130 "is_configured": true, 00:11:05.130 "data_offset": 2048, 00:11:05.130 "data_size": 63488 00:11:05.130 }, 00:11:05.130 { 00:11:05.130 "name": "BaseBdev2", 00:11:05.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.130 "is_configured": false, 00:11:05.130 "data_offset": 0, 00:11:05.130 "data_size": 0 00:11:05.130 }, 00:11:05.130 { 00:11:05.130 "name": "BaseBdev3", 00:11:05.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.130 "is_configured": false, 00:11:05.130 "data_offset": 0, 00:11:05.130 "data_size": 0 00:11:05.130 }, 00:11:05.130 { 00:11:05.130 "name": "BaseBdev4", 00:11:05.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.130 "is_configured": false, 00:11:05.130 "data_offset": 0, 00:11:05.130 "data_size": 0 00:11:05.130 } 00:11:05.130 ] 00:11:05.130 }' 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.130 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.699 14:43:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.699 [2024-12-09 14:43:43.650801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.699 [2024-12-09 14:43:43.650910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.699 [2024-12-09 14:43:43.662834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.699 [2024-12-09 14:43:43.665129] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.699 [2024-12-09 14:43:43.665228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.699 [2024-12-09 14:43:43.665266] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.699 [2024-12-09 14:43:43.665298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.699 [2024-12-09 14:43:43.665322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.699 [2024-12-09 14:43:43.665350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:05.699 "name": "Existed_Raid", 00:11:05.699 "uuid": "87ec3832-a613-4b2c-b94c-9ddf8e65a520", 00:11:05.699 "strip_size_kb": 64, 00:11:05.699 "state": "configuring", 00:11:05.699 "raid_level": "concat", 00:11:05.699 "superblock": true, 00:11:05.699 "num_base_bdevs": 4, 00:11:05.699 "num_base_bdevs_discovered": 1, 00:11:05.699 "num_base_bdevs_operational": 4, 00:11:05.699 "base_bdevs_list": [ 00:11:05.699 { 00:11:05.699 "name": "BaseBdev1", 00:11:05.699 "uuid": "ef057958-ade1-4d87-a86c-4d833b9ae258", 00:11:05.699 "is_configured": true, 00:11:05.699 "data_offset": 2048, 00:11:05.699 "data_size": 63488 00:11:05.699 }, 00:11:05.699 { 00:11:05.699 "name": "BaseBdev2", 00:11:05.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.699 "is_configured": false, 00:11:05.699 "data_offset": 0, 00:11:05.699 "data_size": 0 00:11:05.699 }, 00:11:05.699 { 00:11:05.699 "name": "BaseBdev3", 00:11:05.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.699 "is_configured": false, 00:11:05.699 "data_offset": 0, 00:11:05.699 "data_size": 0 00:11:05.699 }, 00:11:05.699 { 00:11:05.699 "name": "BaseBdev4", 00:11:05.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.699 "is_configured": false, 00:11:05.699 "data_offset": 0, 00:11:05.699 "data_size": 0 00:11:05.699 } 00:11:05.699 ] 00:11:05.699 }' 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.699 14:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.268 [2024-12-09 14:43:44.171426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:06.268 BaseBdev2 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.268 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.268 [ 00:11:06.268 { 00:11:06.268 "name": "BaseBdev2", 00:11:06.268 "aliases": [ 00:11:06.268 "ac65a43c-25bd-4e4f-9c4f-57ab8887a23b" 00:11:06.268 ], 00:11:06.268 "product_name": "Malloc disk", 00:11:06.268 "block_size": 512, 00:11:06.268 "num_blocks": 65536, 00:11:06.268 "uuid": "ac65a43c-25bd-4e4f-9c4f-57ab8887a23b", 
00:11:06.268 "assigned_rate_limits": { 00:11:06.268 "rw_ios_per_sec": 0, 00:11:06.268 "rw_mbytes_per_sec": 0, 00:11:06.268 "r_mbytes_per_sec": 0, 00:11:06.268 "w_mbytes_per_sec": 0 00:11:06.268 }, 00:11:06.268 "claimed": true, 00:11:06.268 "claim_type": "exclusive_write", 00:11:06.268 "zoned": false, 00:11:06.268 "supported_io_types": { 00:11:06.268 "read": true, 00:11:06.268 "write": true, 00:11:06.268 "unmap": true, 00:11:06.268 "flush": true, 00:11:06.268 "reset": true, 00:11:06.268 "nvme_admin": false, 00:11:06.268 "nvme_io": false, 00:11:06.268 "nvme_io_md": false, 00:11:06.268 "write_zeroes": true, 00:11:06.268 "zcopy": true, 00:11:06.268 "get_zone_info": false, 00:11:06.268 "zone_management": false, 00:11:06.268 "zone_append": false, 00:11:06.268 "compare": false, 00:11:06.268 "compare_and_write": false, 00:11:06.268 "abort": true, 00:11:06.268 "seek_hole": false, 00:11:06.268 "seek_data": false, 00:11:06.268 "copy": true, 00:11:06.268 "nvme_iov_md": false 00:11:06.268 }, 00:11:06.268 "memory_domains": [ 00:11:06.268 { 00:11:06.268 "dma_device_id": "system", 00:11:06.268 "dma_device_type": 1 00:11:06.268 }, 00:11:06.268 { 00:11:06.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.268 "dma_device_type": 2 00:11:06.268 } 00:11:06.268 ], 00:11:06.268 "driver_specific": {} 00:11:06.269 } 00:11:06.269 ] 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.269 "name": "Existed_Raid", 00:11:06.269 "uuid": "87ec3832-a613-4b2c-b94c-9ddf8e65a520", 00:11:06.269 "strip_size_kb": 64, 00:11:06.269 "state": "configuring", 00:11:06.269 "raid_level": "concat", 00:11:06.269 "superblock": true, 00:11:06.269 "num_base_bdevs": 4, 00:11:06.269 "num_base_bdevs_discovered": 2, 00:11:06.269 
"num_base_bdevs_operational": 4, 00:11:06.269 "base_bdevs_list": [ 00:11:06.269 { 00:11:06.269 "name": "BaseBdev1", 00:11:06.269 "uuid": "ef057958-ade1-4d87-a86c-4d833b9ae258", 00:11:06.269 "is_configured": true, 00:11:06.269 "data_offset": 2048, 00:11:06.269 "data_size": 63488 00:11:06.269 }, 00:11:06.269 { 00:11:06.269 "name": "BaseBdev2", 00:11:06.269 "uuid": "ac65a43c-25bd-4e4f-9c4f-57ab8887a23b", 00:11:06.269 "is_configured": true, 00:11:06.269 "data_offset": 2048, 00:11:06.269 "data_size": 63488 00:11:06.269 }, 00:11:06.269 { 00:11:06.269 "name": "BaseBdev3", 00:11:06.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.269 "is_configured": false, 00:11:06.269 "data_offset": 0, 00:11:06.269 "data_size": 0 00:11:06.269 }, 00:11:06.269 { 00:11:06.269 "name": "BaseBdev4", 00:11:06.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.269 "is_configured": false, 00:11:06.269 "data_offset": 0, 00:11:06.269 "data_size": 0 00:11:06.269 } 00:11:06.269 ] 00:11:06.269 }' 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.269 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.529 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.529 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.529 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.788 [2024-12-09 14:43:44.672117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.788 BaseBdev3 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.788 [ 00:11:06.788 { 00:11:06.788 "name": "BaseBdev3", 00:11:06.788 "aliases": [ 00:11:06.788 "34c96c69-af06-4acf-b774-2717a97297e3" 00:11:06.788 ], 00:11:06.788 "product_name": "Malloc disk", 00:11:06.788 "block_size": 512, 00:11:06.788 "num_blocks": 65536, 00:11:06.788 "uuid": "34c96c69-af06-4acf-b774-2717a97297e3", 00:11:06.788 "assigned_rate_limits": { 00:11:06.788 "rw_ios_per_sec": 0, 00:11:06.788 "rw_mbytes_per_sec": 0, 00:11:06.788 "r_mbytes_per_sec": 0, 00:11:06.788 "w_mbytes_per_sec": 0 00:11:06.788 }, 00:11:06.788 "claimed": true, 00:11:06.788 "claim_type": "exclusive_write", 00:11:06.788 "zoned": false, 00:11:06.788 "supported_io_types": { 
00:11:06.788 "read": true, 00:11:06.788 "write": true, 00:11:06.788 "unmap": true, 00:11:06.788 "flush": true, 00:11:06.788 "reset": true, 00:11:06.788 "nvme_admin": false, 00:11:06.788 "nvme_io": false, 00:11:06.788 "nvme_io_md": false, 00:11:06.788 "write_zeroes": true, 00:11:06.788 "zcopy": true, 00:11:06.788 "get_zone_info": false, 00:11:06.788 "zone_management": false, 00:11:06.788 "zone_append": false, 00:11:06.788 "compare": false, 00:11:06.788 "compare_and_write": false, 00:11:06.788 "abort": true, 00:11:06.788 "seek_hole": false, 00:11:06.788 "seek_data": false, 00:11:06.788 "copy": true, 00:11:06.788 "nvme_iov_md": false 00:11:06.788 }, 00:11:06.788 "memory_domains": [ 00:11:06.788 { 00:11:06.788 "dma_device_id": "system", 00:11:06.788 "dma_device_type": 1 00:11:06.788 }, 00:11:06.788 { 00:11:06.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.788 "dma_device_type": 2 00:11:06.788 } 00:11:06.788 ], 00:11:06.788 "driver_specific": {} 00:11:06.788 } 00:11:06.788 ] 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.788 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.789 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.789 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.789 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.789 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.789 "name": "Existed_Raid", 00:11:06.789 "uuid": "87ec3832-a613-4b2c-b94c-9ddf8e65a520", 00:11:06.789 "strip_size_kb": 64, 00:11:06.789 "state": "configuring", 00:11:06.789 "raid_level": "concat", 00:11:06.789 "superblock": true, 00:11:06.789 "num_base_bdevs": 4, 00:11:06.789 "num_base_bdevs_discovered": 3, 00:11:06.789 "num_base_bdevs_operational": 4, 00:11:06.789 "base_bdevs_list": [ 00:11:06.789 { 00:11:06.789 "name": "BaseBdev1", 00:11:06.789 "uuid": "ef057958-ade1-4d87-a86c-4d833b9ae258", 00:11:06.789 "is_configured": true, 00:11:06.789 "data_offset": 2048, 00:11:06.789 "data_size": 63488 00:11:06.789 }, 00:11:06.789 { 00:11:06.789 "name": "BaseBdev2", 00:11:06.789 
"uuid": "ac65a43c-25bd-4e4f-9c4f-57ab8887a23b", 00:11:06.789 "is_configured": true, 00:11:06.789 "data_offset": 2048, 00:11:06.789 "data_size": 63488 00:11:06.789 }, 00:11:06.789 { 00:11:06.789 "name": "BaseBdev3", 00:11:06.789 "uuid": "34c96c69-af06-4acf-b774-2717a97297e3", 00:11:06.789 "is_configured": true, 00:11:06.789 "data_offset": 2048, 00:11:06.789 "data_size": 63488 00:11:06.789 }, 00:11:06.789 { 00:11:06.789 "name": "BaseBdev4", 00:11:06.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.789 "is_configured": false, 00:11:06.789 "data_offset": 0, 00:11:06.789 "data_size": 0 00:11:06.789 } 00:11:06.789 ] 00:11:06.789 }' 00:11:06.789 14:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.789 14:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.051 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.051 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.051 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.311 [2024-12-09 14:43:45.176018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.311 [2024-12-09 14:43:45.176560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.311 [2024-12-09 14:43:45.176643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.311 [2024-12-09 14:43:45.177025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:07.311 [2024-12-09 14:43:45.177272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.311 BaseBdev4 00:11:07.311 [2024-12-09 14:43:45.177339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:07.311 [2024-12-09 14:43:45.177618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.311 [ 00:11:07.311 { 00:11:07.311 "name": "BaseBdev4", 00:11:07.311 "aliases": [ 00:11:07.311 "92732ad5-ab50-4fc5-8f75-17b6e13ac2c4" 00:11:07.311 ], 00:11:07.311 "product_name": "Malloc disk", 00:11:07.311 "block_size": 512, 00:11:07.311 
"num_blocks": 65536, 00:11:07.311 "uuid": "92732ad5-ab50-4fc5-8f75-17b6e13ac2c4", 00:11:07.311 "assigned_rate_limits": { 00:11:07.311 "rw_ios_per_sec": 0, 00:11:07.311 "rw_mbytes_per_sec": 0, 00:11:07.311 "r_mbytes_per_sec": 0, 00:11:07.311 "w_mbytes_per_sec": 0 00:11:07.311 }, 00:11:07.311 "claimed": true, 00:11:07.311 "claim_type": "exclusive_write", 00:11:07.311 "zoned": false, 00:11:07.311 "supported_io_types": { 00:11:07.311 "read": true, 00:11:07.311 "write": true, 00:11:07.311 "unmap": true, 00:11:07.311 "flush": true, 00:11:07.311 "reset": true, 00:11:07.311 "nvme_admin": false, 00:11:07.311 "nvme_io": false, 00:11:07.311 "nvme_io_md": false, 00:11:07.311 "write_zeroes": true, 00:11:07.311 "zcopy": true, 00:11:07.311 "get_zone_info": false, 00:11:07.311 "zone_management": false, 00:11:07.311 "zone_append": false, 00:11:07.311 "compare": false, 00:11:07.311 "compare_and_write": false, 00:11:07.311 "abort": true, 00:11:07.311 "seek_hole": false, 00:11:07.311 "seek_data": false, 00:11:07.311 "copy": true, 00:11:07.311 "nvme_iov_md": false 00:11:07.311 }, 00:11:07.311 "memory_domains": [ 00:11:07.311 { 00:11:07.311 "dma_device_id": "system", 00:11:07.311 "dma_device_type": 1 00:11:07.311 }, 00:11:07.311 { 00:11:07.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.311 "dma_device_type": 2 00:11:07.311 } 00:11:07.311 ], 00:11:07.311 "driver_specific": {} 00:11:07.311 } 00:11:07.311 ] 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.311 "name": "Existed_Raid", 00:11:07.311 "uuid": "87ec3832-a613-4b2c-b94c-9ddf8e65a520", 00:11:07.311 "strip_size_kb": 64, 00:11:07.311 "state": "online", 00:11:07.311 "raid_level": "concat", 00:11:07.311 "superblock": true, 00:11:07.311 "num_base_bdevs": 4, 
00:11:07.311 "num_base_bdevs_discovered": 4, 00:11:07.311 "num_base_bdevs_operational": 4, 00:11:07.311 "base_bdevs_list": [ 00:11:07.311 { 00:11:07.311 "name": "BaseBdev1", 00:11:07.311 "uuid": "ef057958-ade1-4d87-a86c-4d833b9ae258", 00:11:07.311 "is_configured": true, 00:11:07.311 "data_offset": 2048, 00:11:07.311 "data_size": 63488 00:11:07.311 }, 00:11:07.311 { 00:11:07.311 "name": "BaseBdev2", 00:11:07.311 "uuid": "ac65a43c-25bd-4e4f-9c4f-57ab8887a23b", 00:11:07.311 "is_configured": true, 00:11:07.311 "data_offset": 2048, 00:11:07.311 "data_size": 63488 00:11:07.311 }, 00:11:07.311 { 00:11:07.311 "name": "BaseBdev3", 00:11:07.311 "uuid": "34c96c69-af06-4acf-b774-2717a97297e3", 00:11:07.311 "is_configured": true, 00:11:07.311 "data_offset": 2048, 00:11:07.311 "data_size": 63488 00:11:07.311 }, 00:11:07.311 { 00:11:07.311 "name": "BaseBdev4", 00:11:07.311 "uuid": "92732ad5-ab50-4fc5-8f75-17b6e13ac2c4", 00:11:07.311 "is_configured": true, 00:11:07.311 "data_offset": 2048, 00:11:07.311 "data_size": 63488 00:11:07.311 } 00:11:07.311 ] 00:11:07.311 }' 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.311 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.881 
14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.881 [2024-12-09 14:43:45.707727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.881 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.881 "name": "Existed_Raid", 00:11:07.881 "aliases": [ 00:11:07.881 "87ec3832-a613-4b2c-b94c-9ddf8e65a520" 00:11:07.881 ], 00:11:07.881 "product_name": "Raid Volume", 00:11:07.881 "block_size": 512, 00:11:07.881 "num_blocks": 253952, 00:11:07.881 "uuid": "87ec3832-a613-4b2c-b94c-9ddf8e65a520", 00:11:07.881 "assigned_rate_limits": { 00:11:07.881 "rw_ios_per_sec": 0, 00:11:07.881 "rw_mbytes_per_sec": 0, 00:11:07.881 "r_mbytes_per_sec": 0, 00:11:07.881 "w_mbytes_per_sec": 0 00:11:07.881 }, 00:11:07.881 "claimed": false, 00:11:07.881 "zoned": false, 00:11:07.881 "supported_io_types": { 00:11:07.881 "read": true, 00:11:07.881 "write": true, 00:11:07.881 "unmap": true, 00:11:07.881 "flush": true, 00:11:07.881 "reset": true, 00:11:07.881 "nvme_admin": false, 00:11:07.881 "nvme_io": false, 00:11:07.881 "nvme_io_md": false, 00:11:07.881 "write_zeroes": true, 00:11:07.881 "zcopy": false, 00:11:07.881 "get_zone_info": false, 00:11:07.881 "zone_management": false, 00:11:07.881 "zone_append": false, 00:11:07.881 "compare": false, 00:11:07.881 "compare_and_write": false, 00:11:07.881 "abort": false, 00:11:07.881 "seek_hole": false, 00:11:07.881 "seek_data": false, 00:11:07.881 "copy": false, 00:11:07.881 
"nvme_iov_md": false 00:11:07.881 }, 00:11:07.881 "memory_domains": [ 00:11:07.881 { 00:11:07.881 "dma_device_id": "system", 00:11:07.881 "dma_device_type": 1 00:11:07.881 }, 00:11:07.881 { 00:11:07.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.881 "dma_device_type": 2 00:11:07.881 }, 00:11:07.881 { 00:11:07.881 "dma_device_id": "system", 00:11:07.881 "dma_device_type": 1 00:11:07.881 }, 00:11:07.881 { 00:11:07.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.881 "dma_device_type": 2 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "dma_device_id": "system", 00:11:07.882 "dma_device_type": 1 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.882 "dma_device_type": 2 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "dma_device_id": "system", 00:11:07.882 "dma_device_type": 1 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.882 "dma_device_type": 2 00:11:07.882 } 00:11:07.882 ], 00:11:07.882 "driver_specific": { 00:11:07.882 "raid": { 00:11:07.882 "uuid": "87ec3832-a613-4b2c-b94c-9ddf8e65a520", 00:11:07.882 "strip_size_kb": 64, 00:11:07.882 "state": "online", 00:11:07.882 "raid_level": "concat", 00:11:07.882 "superblock": true, 00:11:07.882 "num_base_bdevs": 4, 00:11:07.882 "num_base_bdevs_discovered": 4, 00:11:07.882 "num_base_bdevs_operational": 4, 00:11:07.882 "base_bdevs_list": [ 00:11:07.882 { 00:11:07.882 "name": "BaseBdev1", 00:11:07.882 "uuid": "ef057958-ade1-4d87-a86c-4d833b9ae258", 00:11:07.882 "is_configured": true, 00:11:07.882 "data_offset": 2048, 00:11:07.882 "data_size": 63488 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "name": "BaseBdev2", 00:11:07.882 "uuid": "ac65a43c-25bd-4e4f-9c4f-57ab8887a23b", 00:11:07.882 "is_configured": true, 00:11:07.882 "data_offset": 2048, 00:11:07.882 "data_size": 63488 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "name": "BaseBdev3", 00:11:07.882 "uuid": "34c96c69-af06-4acf-b774-2717a97297e3", 00:11:07.882 "is_configured": true, 
00:11:07.882 "data_offset": 2048, 00:11:07.882 "data_size": 63488 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "name": "BaseBdev4", 00:11:07.882 "uuid": "92732ad5-ab50-4fc5-8f75-17b6e13ac2c4", 00:11:07.882 "is_configured": true, 00:11:07.882 "data_offset": 2048, 00:11:07.882 "data_size": 63488 00:11:07.882 } 00:11:07.882 ] 00:11:07.882 } 00:11:07.882 } 00:11:07.882 }' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.882 BaseBdev2 00:11:07.882 BaseBdev3 00:11:07.882 BaseBdev4' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.882 14:43:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.882 14:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.142 [2024-12-09 14:43:46.038962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.142 [2024-12-09 14:43:46.039139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.142 [2024-12-09 14:43:46.039221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.142 "name": "Existed_Raid", 00:11:08.142 "uuid": "87ec3832-a613-4b2c-b94c-9ddf8e65a520", 00:11:08.142 "strip_size_kb": 64, 00:11:08.142 "state": "offline", 00:11:08.142 "raid_level": "concat", 00:11:08.142 "superblock": true, 00:11:08.142 "num_base_bdevs": 4, 00:11:08.142 "num_base_bdevs_discovered": 3, 00:11:08.142 "num_base_bdevs_operational": 3, 00:11:08.142 "base_bdevs_list": [ 00:11:08.142 { 00:11:08.142 "name": null, 00:11:08.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.142 "is_configured": false, 00:11:08.142 "data_offset": 0, 00:11:08.142 "data_size": 63488 00:11:08.142 }, 00:11:08.142 { 00:11:08.142 "name": "BaseBdev2", 00:11:08.142 "uuid": "ac65a43c-25bd-4e4f-9c4f-57ab8887a23b", 00:11:08.142 "is_configured": true, 00:11:08.142 "data_offset": 2048, 00:11:08.142 "data_size": 63488 00:11:08.142 }, 00:11:08.142 { 00:11:08.142 "name": "BaseBdev3", 00:11:08.142 "uuid": "34c96c69-af06-4acf-b774-2717a97297e3", 00:11:08.142 "is_configured": true, 00:11:08.142 "data_offset": 2048, 00:11:08.142 "data_size": 63488 00:11:08.142 }, 00:11:08.142 { 00:11:08.142 "name": "BaseBdev4", 00:11:08.142 "uuid": "92732ad5-ab50-4fc5-8f75-17b6e13ac2c4", 00:11:08.142 "is_configured": true, 00:11:08.142 "data_offset": 2048, 00:11:08.142 "data_size": 63488 00:11:08.142 } 00:11:08.142 ] 00:11:08.142 }' 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.142 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.711 
14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.711 [2024-12-09 14:43:46.700406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.711 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.971 [2024-12-09 14:43:46.867312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.971 14:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.971 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.971 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.971 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.971 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:08.971 14:43:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.971 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.971 [2024-12-09 14:43:47.038846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:08.971 [2024-12-09 14:43:47.038944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.230 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.231 BaseBdev2 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.231 [ 00:11:09.231 { 00:11:09.231 "name": "BaseBdev2", 00:11:09.231 "aliases": [ 00:11:09.231 
"ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6" 00:11:09.231 ], 00:11:09.231 "product_name": "Malloc disk", 00:11:09.231 "block_size": 512, 00:11:09.231 "num_blocks": 65536, 00:11:09.231 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:09.231 "assigned_rate_limits": { 00:11:09.231 "rw_ios_per_sec": 0, 00:11:09.231 "rw_mbytes_per_sec": 0, 00:11:09.231 "r_mbytes_per_sec": 0, 00:11:09.231 "w_mbytes_per_sec": 0 00:11:09.231 }, 00:11:09.231 "claimed": false, 00:11:09.231 "zoned": false, 00:11:09.231 "supported_io_types": { 00:11:09.231 "read": true, 00:11:09.231 "write": true, 00:11:09.231 "unmap": true, 00:11:09.231 "flush": true, 00:11:09.231 "reset": true, 00:11:09.231 "nvme_admin": false, 00:11:09.231 "nvme_io": false, 00:11:09.231 "nvme_io_md": false, 00:11:09.231 "write_zeroes": true, 00:11:09.231 "zcopy": true, 00:11:09.231 "get_zone_info": false, 00:11:09.231 "zone_management": false, 00:11:09.231 "zone_append": false, 00:11:09.231 "compare": false, 00:11:09.231 "compare_and_write": false, 00:11:09.231 "abort": true, 00:11:09.231 "seek_hole": false, 00:11:09.231 "seek_data": false, 00:11:09.231 "copy": true, 00:11:09.231 "nvme_iov_md": false 00:11:09.231 }, 00:11:09.231 "memory_domains": [ 00:11:09.231 { 00:11:09.231 "dma_device_id": "system", 00:11:09.231 "dma_device_type": 1 00:11:09.231 }, 00:11:09.231 { 00:11:09.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.231 "dma_device_type": 2 00:11:09.231 } 00:11:09.231 ], 00:11:09.231 "driver_specific": {} 00:11:09.231 } 00:11:09.231 ] 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.231 14:43:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.231 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.231 BaseBdev3 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.492 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.492 [ 00:11:09.492 { 
00:11:09.492 "name": "BaseBdev3", 00:11:09.492 "aliases": [ 00:11:09.492 "580a6f59-cc22-42c0-901c-9347b56f3829" 00:11:09.492 ], 00:11:09.492 "product_name": "Malloc disk", 00:11:09.492 "block_size": 512, 00:11:09.492 "num_blocks": 65536, 00:11:09.492 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:09.492 "assigned_rate_limits": { 00:11:09.492 "rw_ios_per_sec": 0, 00:11:09.492 "rw_mbytes_per_sec": 0, 00:11:09.492 "r_mbytes_per_sec": 0, 00:11:09.492 "w_mbytes_per_sec": 0 00:11:09.492 }, 00:11:09.492 "claimed": false, 00:11:09.492 "zoned": false, 00:11:09.492 "supported_io_types": { 00:11:09.492 "read": true, 00:11:09.492 "write": true, 00:11:09.492 "unmap": true, 00:11:09.492 "flush": true, 00:11:09.492 "reset": true, 00:11:09.492 "nvme_admin": false, 00:11:09.492 "nvme_io": false, 00:11:09.492 "nvme_io_md": false, 00:11:09.492 "write_zeroes": true, 00:11:09.492 "zcopy": true, 00:11:09.492 "get_zone_info": false, 00:11:09.492 "zone_management": false, 00:11:09.492 "zone_append": false, 00:11:09.492 "compare": false, 00:11:09.492 "compare_and_write": false, 00:11:09.492 "abort": true, 00:11:09.492 "seek_hole": false, 00:11:09.492 "seek_data": false, 00:11:09.492 "copy": true, 00:11:09.492 "nvme_iov_md": false 00:11:09.492 }, 00:11:09.492 "memory_domains": [ 00:11:09.493 { 00:11:09.493 "dma_device_id": "system", 00:11:09.493 "dma_device_type": 1 00:11:09.493 }, 00:11:09.493 { 00:11:09.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.493 "dma_device_type": 2 00:11:09.493 } 00:11:09.493 ], 00:11:09.493 "driver_specific": {} 00:11:09.493 } 00:11:09.493 ] 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.493 BaseBdev4 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.493 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:09.493 [ 00:11:09.493 { 00:11:09.493 "name": "BaseBdev4", 00:11:09.493 "aliases": [ 00:11:09.493 "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50" 00:11:09.493 ], 00:11:09.493 "product_name": "Malloc disk", 00:11:09.493 "block_size": 512, 00:11:09.493 "num_blocks": 65536, 00:11:09.493 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:09.493 "assigned_rate_limits": { 00:11:09.493 "rw_ios_per_sec": 0, 00:11:09.493 "rw_mbytes_per_sec": 0, 00:11:09.493 "r_mbytes_per_sec": 0, 00:11:09.493 "w_mbytes_per_sec": 0 00:11:09.493 }, 00:11:09.493 "claimed": false, 00:11:09.493 "zoned": false, 00:11:09.493 "supported_io_types": { 00:11:09.493 "read": true, 00:11:09.493 "write": true, 00:11:09.493 "unmap": true, 00:11:09.493 "flush": true, 00:11:09.493 "reset": true, 00:11:09.493 "nvme_admin": false, 00:11:09.493 "nvme_io": false, 00:11:09.493 "nvme_io_md": false, 00:11:09.493 "write_zeroes": true, 00:11:09.493 "zcopy": true, 00:11:09.493 "get_zone_info": false, 00:11:09.493 "zone_management": false, 00:11:09.493 "zone_append": false, 00:11:09.493 "compare": false, 00:11:09.493 "compare_and_write": false, 00:11:09.493 "abort": true, 00:11:09.493 "seek_hole": false, 00:11:09.493 "seek_data": false, 00:11:09.493 "copy": true, 00:11:09.493 "nvme_iov_md": false 00:11:09.493 }, 00:11:09.493 "memory_domains": [ 00:11:09.493 { 00:11:09.493 "dma_device_id": "system", 00:11:09.493 "dma_device_type": 1 00:11:09.493 }, 00:11:09.493 { 00:11:09.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.493 "dma_device_type": 2 00:11:09.493 } 00:11:09.493 ], 00:11:09.493 "driver_specific": {} 00:11:09.493 } 00:11:09.494 ] 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.494 14:43:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.494 [2024-12-09 14:43:47.493436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.494 [2024-12-09 14:43:47.493522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.494 [2024-12-09 14:43:47.493559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.494 [2024-12-09 14:43:47.495937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.494 [2024-12-09 14:43:47.496011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.494 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.494 "name": "Existed_Raid", 00:11:09.494 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:09.494 "strip_size_kb": 64, 00:11:09.494 "state": "configuring", 00:11:09.494 "raid_level": "concat", 00:11:09.494 "superblock": true, 00:11:09.494 "num_base_bdevs": 4, 00:11:09.494 "num_base_bdevs_discovered": 3, 00:11:09.494 "num_base_bdevs_operational": 4, 00:11:09.494 "base_bdevs_list": [ 00:11:09.494 { 00:11:09.494 "name": "BaseBdev1", 00:11:09.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.494 "is_configured": false, 00:11:09.494 "data_offset": 0, 00:11:09.494 "data_size": 0 00:11:09.494 }, 00:11:09.494 { 00:11:09.494 "name": "BaseBdev2", 00:11:09.494 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:09.494 "is_configured": true, 00:11:09.494 "data_offset": 2048, 00:11:09.494 "data_size": 63488 
00:11:09.494 }, 00:11:09.494 { 00:11:09.494 "name": "BaseBdev3", 00:11:09.494 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:09.494 "is_configured": true, 00:11:09.495 "data_offset": 2048, 00:11:09.495 "data_size": 63488 00:11:09.495 }, 00:11:09.495 { 00:11:09.495 "name": "BaseBdev4", 00:11:09.495 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:09.495 "is_configured": true, 00:11:09.495 "data_offset": 2048, 00:11:09.495 "data_size": 63488 00:11:09.495 } 00:11:09.495 ] 00:11:09.495 }' 00:11:09.495 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.495 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.064 [2024-12-09 14:43:47.992659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.064 14:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.064 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.064 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.064 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.064 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.064 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.064 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.064 "name": "Existed_Raid", 00:11:10.064 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:10.064 "strip_size_kb": 64, 00:11:10.064 "state": "configuring", 00:11:10.064 "raid_level": "concat", 00:11:10.064 "superblock": true, 00:11:10.064 "num_base_bdevs": 4, 00:11:10.064 "num_base_bdevs_discovered": 2, 00:11:10.064 "num_base_bdevs_operational": 4, 00:11:10.064 "base_bdevs_list": [ 00:11:10.064 { 00:11:10.064 "name": "BaseBdev1", 00:11:10.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.064 "is_configured": false, 00:11:10.064 "data_offset": 0, 00:11:10.064 "data_size": 0 00:11:10.064 }, 00:11:10.064 { 00:11:10.064 "name": null, 00:11:10.064 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:10.064 "is_configured": false, 00:11:10.064 "data_offset": 0, 00:11:10.064 "data_size": 63488 
00:11:10.064 }, 00:11:10.064 { 00:11:10.064 "name": "BaseBdev3", 00:11:10.064 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:10.064 "is_configured": true, 00:11:10.064 "data_offset": 2048, 00:11:10.064 "data_size": 63488 00:11:10.064 }, 00:11:10.064 { 00:11:10.065 "name": "BaseBdev4", 00:11:10.065 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:10.065 "is_configured": true, 00:11:10.065 "data_offset": 2048, 00:11:10.065 "data_size": 63488 00:11:10.065 } 00:11:10.065 ] 00:11:10.065 }' 00:11:10.065 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.065 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.635 [2024-12-09 14:43:48.591829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.635 BaseBdev1 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.635 [ 00:11:10.635 { 00:11:10.635 "name": "BaseBdev1", 00:11:10.635 "aliases": [ 00:11:10.635 "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c" 00:11:10.635 ], 00:11:10.635 "product_name": "Malloc disk", 00:11:10.635 "block_size": 512, 00:11:10.635 "num_blocks": 65536, 00:11:10.635 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:10.635 "assigned_rate_limits": { 00:11:10.635 "rw_ios_per_sec": 0, 00:11:10.635 "rw_mbytes_per_sec": 0, 
00:11:10.635 "r_mbytes_per_sec": 0, 00:11:10.635 "w_mbytes_per_sec": 0 00:11:10.635 }, 00:11:10.635 "claimed": true, 00:11:10.635 "claim_type": "exclusive_write", 00:11:10.635 "zoned": false, 00:11:10.635 "supported_io_types": { 00:11:10.635 "read": true, 00:11:10.635 "write": true, 00:11:10.635 "unmap": true, 00:11:10.635 "flush": true, 00:11:10.635 "reset": true, 00:11:10.635 "nvme_admin": false, 00:11:10.635 "nvme_io": false, 00:11:10.635 "nvme_io_md": false, 00:11:10.635 "write_zeroes": true, 00:11:10.635 "zcopy": true, 00:11:10.635 "get_zone_info": false, 00:11:10.635 "zone_management": false, 00:11:10.635 "zone_append": false, 00:11:10.635 "compare": false, 00:11:10.635 "compare_and_write": false, 00:11:10.635 "abort": true, 00:11:10.635 "seek_hole": false, 00:11:10.635 "seek_data": false, 00:11:10.635 "copy": true, 00:11:10.635 "nvme_iov_md": false 00:11:10.635 }, 00:11:10.635 "memory_domains": [ 00:11:10.635 { 00:11:10.635 "dma_device_id": "system", 00:11:10.635 "dma_device_type": 1 00:11:10.635 }, 00:11:10.635 { 00:11:10.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.635 "dma_device_type": 2 00:11:10.635 } 00:11:10.635 ], 00:11:10.635 "driver_specific": {} 00:11:10.635 } 00:11:10.635 ] 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.635 14:43:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.635 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.636 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.636 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.636 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.636 "name": "Existed_Raid", 00:11:10.636 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:10.636 "strip_size_kb": 64, 00:11:10.636 "state": "configuring", 00:11:10.636 "raid_level": "concat", 00:11:10.636 "superblock": true, 00:11:10.636 "num_base_bdevs": 4, 00:11:10.636 "num_base_bdevs_discovered": 3, 00:11:10.636 "num_base_bdevs_operational": 4, 00:11:10.636 "base_bdevs_list": [ 00:11:10.636 { 00:11:10.636 "name": "BaseBdev1", 00:11:10.636 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:10.636 "is_configured": true, 00:11:10.636 "data_offset": 2048, 00:11:10.636 "data_size": 63488 00:11:10.636 }, 00:11:10.636 { 
00:11:10.636 "name": null, 00:11:10.636 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:10.636 "is_configured": false, 00:11:10.636 "data_offset": 0, 00:11:10.636 "data_size": 63488 00:11:10.636 }, 00:11:10.636 { 00:11:10.636 "name": "BaseBdev3", 00:11:10.636 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:10.636 "is_configured": true, 00:11:10.636 "data_offset": 2048, 00:11:10.636 "data_size": 63488 00:11:10.636 }, 00:11:10.636 { 00:11:10.636 "name": "BaseBdev4", 00:11:10.636 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:10.636 "is_configured": true, 00:11:10.636 "data_offset": 2048, 00:11:10.636 "data_size": 63488 00:11:10.636 } 00:11:10.636 ] 00:11:10.636 }' 00:11:10.636 14:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.636 14:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.204 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.205 [2024-12-09 14:43:49.147204] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.205 14:43:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.205 "name": "Existed_Raid", 00:11:11.205 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:11.205 "strip_size_kb": 64, 00:11:11.205 "state": "configuring", 00:11:11.205 "raid_level": "concat", 00:11:11.205 "superblock": true, 00:11:11.205 "num_base_bdevs": 4, 00:11:11.205 "num_base_bdevs_discovered": 2, 00:11:11.205 "num_base_bdevs_operational": 4, 00:11:11.205 "base_bdevs_list": [ 00:11:11.205 { 00:11:11.205 "name": "BaseBdev1", 00:11:11.205 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:11.205 "is_configured": true, 00:11:11.205 "data_offset": 2048, 00:11:11.205 "data_size": 63488 00:11:11.205 }, 00:11:11.205 { 00:11:11.205 "name": null, 00:11:11.205 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:11.205 "is_configured": false, 00:11:11.205 "data_offset": 0, 00:11:11.205 "data_size": 63488 00:11:11.205 }, 00:11:11.205 { 00:11:11.205 "name": null, 00:11:11.205 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:11.205 "is_configured": false, 00:11:11.205 "data_offset": 0, 00:11:11.205 "data_size": 63488 00:11:11.205 }, 00:11:11.205 { 00:11:11.205 "name": "BaseBdev4", 00:11:11.205 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:11.205 "is_configured": true, 00:11:11.205 "data_offset": 2048, 00:11:11.205 "data_size": 63488 00:11:11.205 } 00:11:11.205 ] 00:11:11.205 }' 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.205 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.773 
14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.773 [2024-12-09 14:43:49.674272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.773 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.773 "name": "Existed_Raid", 00:11:11.774 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:11.774 "strip_size_kb": 64, 00:11:11.774 "state": "configuring", 00:11:11.774 "raid_level": "concat", 00:11:11.774 "superblock": true, 00:11:11.774 "num_base_bdevs": 4, 00:11:11.774 "num_base_bdevs_discovered": 3, 00:11:11.774 "num_base_bdevs_operational": 4, 00:11:11.774 "base_bdevs_list": [ 00:11:11.774 { 00:11:11.774 "name": "BaseBdev1", 00:11:11.774 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:11.774 "is_configured": true, 00:11:11.774 "data_offset": 2048, 00:11:11.774 "data_size": 63488 00:11:11.774 }, 00:11:11.774 { 00:11:11.774 "name": null, 00:11:11.774 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:11.774 "is_configured": false, 00:11:11.774 "data_offset": 0, 00:11:11.774 "data_size": 63488 00:11:11.774 }, 00:11:11.774 { 00:11:11.774 "name": "BaseBdev3", 00:11:11.774 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:11.774 "is_configured": true, 00:11:11.774 "data_offset": 2048, 00:11:11.774 "data_size": 63488 00:11:11.774 }, 00:11:11.774 { 00:11:11.774 "name": "BaseBdev4", 00:11:11.774 "uuid": 
"ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:11.774 "is_configured": true, 00:11:11.774 "data_offset": 2048, 00:11:11.774 "data_size": 63488 00:11:11.774 } 00:11:11.774 ] 00:11:11.774 }' 00:11:11.774 14:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.774 14:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.032 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:12.032 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.032 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.032 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.292 [2024-12-09 14:43:50.181493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.292 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.292 "name": "Existed_Raid", 00:11:12.292 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:12.292 "strip_size_kb": 64, 00:11:12.292 "state": "configuring", 00:11:12.292 "raid_level": "concat", 00:11:12.292 "superblock": true, 00:11:12.292 "num_base_bdevs": 4, 00:11:12.292 "num_base_bdevs_discovered": 2, 00:11:12.292 "num_base_bdevs_operational": 4, 00:11:12.292 "base_bdevs_list": [ 00:11:12.292 { 00:11:12.292 "name": null, 00:11:12.292 
"uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:12.293 "is_configured": false, 00:11:12.293 "data_offset": 0, 00:11:12.293 "data_size": 63488 00:11:12.293 }, 00:11:12.293 { 00:11:12.293 "name": null, 00:11:12.293 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:12.293 "is_configured": false, 00:11:12.293 "data_offset": 0, 00:11:12.293 "data_size": 63488 00:11:12.293 }, 00:11:12.293 { 00:11:12.293 "name": "BaseBdev3", 00:11:12.293 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:12.293 "is_configured": true, 00:11:12.293 "data_offset": 2048, 00:11:12.293 "data_size": 63488 00:11:12.293 }, 00:11:12.293 { 00:11:12.293 "name": "BaseBdev4", 00:11:12.293 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:12.293 "is_configured": true, 00:11:12.293 "data_offset": 2048, 00:11:12.293 "data_size": 63488 00:11:12.293 } 00:11:12.293 ] 00:11:12.293 }' 00:11:12.293 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.293 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.863 [2024-12-09 14:43:50.846460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.863 14:43:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.863 "name": "Existed_Raid", 00:11:12.863 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:12.863 "strip_size_kb": 64, 00:11:12.863 "state": "configuring", 00:11:12.863 "raid_level": "concat", 00:11:12.863 "superblock": true, 00:11:12.863 "num_base_bdevs": 4, 00:11:12.863 "num_base_bdevs_discovered": 3, 00:11:12.863 "num_base_bdevs_operational": 4, 00:11:12.863 "base_bdevs_list": [ 00:11:12.863 { 00:11:12.863 "name": null, 00:11:12.863 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:12.863 "is_configured": false, 00:11:12.863 "data_offset": 0, 00:11:12.863 "data_size": 63488 00:11:12.863 }, 00:11:12.863 { 00:11:12.863 "name": "BaseBdev2", 00:11:12.863 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:12.863 "is_configured": true, 00:11:12.863 "data_offset": 2048, 00:11:12.863 "data_size": 63488 00:11:12.863 }, 00:11:12.863 { 00:11:12.863 "name": "BaseBdev3", 00:11:12.863 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:12.863 "is_configured": true, 00:11:12.863 "data_offset": 2048, 00:11:12.863 "data_size": 63488 00:11:12.863 }, 00:11:12.863 { 00:11:12.863 "name": "BaseBdev4", 00:11:12.863 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:12.863 "is_configured": true, 00:11:12.863 "data_offset": 2048, 00:11:12.863 "data_size": 63488 00:11:12.863 } 00:11:12.863 ] 00:11:12.863 }' 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.863 14:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.434 14:43:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.434 [2024-12-09 14:43:51.449852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:13.434 [2024-12-09 14:43:51.450308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:13.434 [2024-12-09 14:43:51.450374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.434 [2024-12-09 14:43:51.450780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:13.434 NewBaseBdev 00:11:13.434 [2024-12-09 14:43:51.451027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:13.434 [2024-12-09 14:43:51.451050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:13.434 [2024-12-09 14:43:51.451235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.434 14:43:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.434 [ 00:11:13.434 { 00:11:13.434 "name": "NewBaseBdev", 00:11:13.434 "aliases": [ 00:11:13.434 "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c" 00:11:13.434 ], 00:11:13.434 "product_name": "Malloc disk", 00:11:13.434 "block_size": 512, 00:11:13.434 "num_blocks": 65536, 00:11:13.434 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:13.434 "assigned_rate_limits": { 00:11:13.434 "rw_ios_per_sec": 0, 00:11:13.434 "rw_mbytes_per_sec": 0, 00:11:13.434 "r_mbytes_per_sec": 0, 00:11:13.434 "w_mbytes_per_sec": 0 00:11:13.434 }, 00:11:13.434 "claimed": true, 00:11:13.434 "claim_type": "exclusive_write", 00:11:13.434 "zoned": false, 00:11:13.434 "supported_io_types": { 00:11:13.434 "read": true, 00:11:13.434 "write": true, 00:11:13.434 "unmap": true, 00:11:13.434 "flush": true, 00:11:13.434 "reset": true, 00:11:13.434 "nvme_admin": false, 00:11:13.434 "nvme_io": false, 00:11:13.434 "nvme_io_md": false, 00:11:13.434 "write_zeroes": true, 00:11:13.434 "zcopy": true, 00:11:13.434 "get_zone_info": false, 00:11:13.434 "zone_management": false, 00:11:13.434 "zone_append": false, 00:11:13.434 "compare": false, 00:11:13.434 "compare_and_write": false, 00:11:13.434 "abort": true, 00:11:13.434 "seek_hole": false, 00:11:13.434 "seek_data": false, 00:11:13.434 "copy": true, 00:11:13.434 "nvme_iov_md": false 00:11:13.434 }, 00:11:13.434 "memory_domains": [ 00:11:13.434 { 00:11:13.434 "dma_device_id": "system", 00:11:13.434 "dma_device_type": 1 00:11:13.434 }, 00:11:13.434 { 00:11:13.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.434 "dma_device_type": 2 00:11:13.434 } 00:11:13.434 ], 00:11:13.434 "driver_specific": {} 00:11:13.434 } 00:11:13.434 ] 00:11:13.434 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.435 14:43:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.435 "name": "Existed_Raid", 00:11:13.435 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:13.435 "strip_size_kb": 64, 00:11:13.435 
"state": "online", 00:11:13.435 "raid_level": "concat", 00:11:13.435 "superblock": true, 00:11:13.435 "num_base_bdevs": 4, 00:11:13.435 "num_base_bdevs_discovered": 4, 00:11:13.435 "num_base_bdevs_operational": 4, 00:11:13.435 "base_bdevs_list": [ 00:11:13.435 { 00:11:13.435 "name": "NewBaseBdev", 00:11:13.435 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:13.435 "is_configured": true, 00:11:13.435 "data_offset": 2048, 00:11:13.435 "data_size": 63488 00:11:13.435 }, 00:11:13.435 { 00:11:13.435 "name": "BaseBdev2", 00:11:13.435 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:13.435 "is_configured": true, 00:11:13.435 "data_offset": 2048, 00:11:13.435 "data_size": 63488 00:11:13.435 }, 00:11:13.435 { 00:11:13.435 "name": "BaseBdev3", 00:11:13.435 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:13.435 "is_configured": true, 00:11:13.435 "data_offset": 2048, 00:11:13.435 "data_size": 63488 00:11:13.435 }, 00:11:13.435 { 00:11:13.435 "name": "BaseBdev4", 00:11:13.435 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:13.435 "is_configured": true, 00:11:13.435 "data_offset": 2048, 00:11:13.435 "data_size": 63488 00:11:13.435 } 00:11:13.435 ] 00:11:13.435 }' 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.435 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.006 
14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.006 [2024-12-09 14:43:51.925684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.006 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.006 "name": "Existed_Raid", 00:11:14.006 "aliases": [ 00:11:14.006 "fb114a2f-f1f5-4913-9838-c476755d8cc6" 00:11:14.006 ], 00:11:14.006 "product_name": "Raid Volume", 00:11:14.006 "block_size": 512, 00:11:14.006 "num_blocks": 253952, 00:11:14.006 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:14.006 "assigned_rate_limits": { 00:11:14.006 "rw_ios_per_sec": 0, 00:11:14.006 "rw_mbytes_per_sec": 0, 00:11:14.006 "r_mbytes_per_sec": 0, 00:11:14.006 "w_mbytes_per_sec": 0 00:11:14.006 }, 00:11:14.006 "claimed": false, 00:11:14.006 "zoned": false, 00:11:14.006 "supported_io_types": { 00:11:14.006 "read": true, 00:11:14.006 "write": true, 00:11:14.006 "unmap": true, 00:11:14.006 "flush": true, 00:11:14.006 "reset": true, 00:11:14.006 "nvme_admin": false, 00:11:14.006 "nvme_io": false, 00:11:14.006 "nvme_io_md": false, 00:11:14.006 "write_zeroes": true, 00:11:14.006 "zcopy": false, 00:11:14.006 "get_zone_info": false, 00:11:14.006 "zone_management": false, 00:11:14.006 "zone_append": false, 00:11:14.006 "compare": false, 00:11:14.006 "compare_and_write": false, 00:11:14.006 "abort": 
false, 00:11:14.006 "seek_hole": false, 00:11:14.006 "seek_data": false, 00:11:14.006 "copy": false, 00:11:14.006 "nvme_iov_md": false 00:11:14.006 }, 00:11:14.006 "memory_domains": [ 00:11:14.006 { 00:11:14.006 "dma_device_id": "system", 00:11:14.006 "dma_device_type": 1 00:11:14.006 }, 00:11:14.006 { 00:11:14.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.006 "dma_device_type": 2 00:11:14.006 }, 00:11:14.006 { 00:11:14.006 "dma_device_id": "system", 00:11:14.006 "dma_device_type": 1 00:11:14.006 }, 00:11:14.006 { 00:11:14.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.006 "dma_device_type": 2 00:11:14.006 }, 00:11:14.006 { 00:11:14.006 "dma_device_id": "system", 00:11:14.006 "dma_device_type": 1 00:11:14.006 }, 00:11:14.006 { 00:11:14.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.006 "dma_device_type": 2 00:11:14.006 }, 00:11:14.006 { 00:11:14.006 "dma_device_id": "system", 00:11:14.007 "dma_device_type": 1 00:11:14.007 }, 00:11:14.007 { 00:11:14.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.007 "dma_device_type": 2 00:11:14.007 } 00:11:14.007 ], 00:11:14.007 "driver_specific": { 00:11:14.007 "raid": { 00:11:14.007 "uuid": "fb114a2f-f1f5-4913-9838-c476755d8cc6", 00:11:14.007 "strip_size_kb": 64, 00:11:14.007 "state": "online", 00:11:14.007 "raid_level": "concat", 00:11:14.007 "superblock": true, 00:11:14.007 "num_base_bdevs": 4, 00:11:14.007 "num_base_bdevs_discovered": 4, 00:11:14.007 "num_base_bdevs_operational": 4, 00:11:14.007 "base_bdevs_list": [ 00:11:14.007 { 00:11:14.007 "name": "NewBaseBdev", 00:11:14.007 "uuid": "a07ee474-ce9a-43c7-bd2c-4d5a51bb7b5c", 00:11:14.007 "is_configured": true, 00:11:14.007 "data_offset": 2048, 00:11:14.007 "data_size": 63488 00:11:14.007 }, 00:11:14.007 { 00:11:14.007 "name": "BaseBdev2", 00:11:14.007 "uuid": "ddc2f0b4-e15b-4cbd-be3e-0736e2e589a6", 00:11:14.007 "is_configured": true, 00:11:14.007 "data_offset": 2048, 00:11:14.007 "data_size": 63488 00:11:14.007 }, 00:11:14.007 { 00:11:14.007 
"name": "BaseBdev3", 00:11:14.007 "uuid": "580a6f59-cc22-42c0-901c-9347b56f3829", 00:11:14.007 "is_configured": true, 00:11:14.007 "data_offset": 2048, 00:11:14.007 "data_size": 63488 00:11:14.007 }, 00:11:14.007 { 00:11:14.007 "name": "BaseBdev4", 00:11:14.007 "uuid": "ff018dd3-6fa7-4be9-ad3b-6b08ea12dd50", 00:11:14.007 "is_configured": true, 00:11:14.007 "data_offset": 2048, 00:11:14.007 "data_size": 63488 00:11:14.007 } 00:11:14.007 ] 00:11:14.007 } 00:11:14.007 } 00:11:14.007 }' 00:11:14.007 14:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:14.007 BaseBdev2 00:11:14.007 BaseBdev3 00:11:14.007 BaseBdev4' 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.007 14:43:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.007 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.267 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.267 [2024-12-09 14:43:52.224706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.267 [2024-12-09 14:43:52.224769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.267 [2024-12-09 14:43:52.224883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.268 [2024-12-09 14:43:52.224975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.268 [2024-12-09 14:43:52.224988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73248 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73248 ']' 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73248 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73248 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73248' 00:11:14.268 killing process with pid 73248 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73248 00:11:14.268 [2024-12-09 14:43:52.274206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.268 14:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73248 00:11:14.837 [2024-12-09 14:43:52.750956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.224 14:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:16.224 ************************************ 00:11:16.224 END TEST raid_state_function_test_sb 00:11:16.224 ************************************ 00:11:16.224 00:11:16.224 real 0m12.475s 00:11:16.224 user 0m19.374s 00:11:16.224 sys 
0m2.361s 00:11:16.224 14:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.224 14:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.224 14:43:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:16.224 14:43:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.224 14:43:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.224 14:43:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.224 ************************************ 00:11:16.224 START TEST raid_superblock_test 00:11:16.224 ************************************ 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73924 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73924 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73924 ']' 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.224 14:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.224 [2024-12-09 14:43:54.285004] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:11:16.224 [2024-12-09 14:43:54.285132] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73924 ] 00:11:16.517 [2024-12-09 14:43:54.460334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.517 [2024-12-09 14:43:54.606429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.777 [2024-12-09 14:43:54.855554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.777 [2024-12-09 14:43:54.855813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:17.036 
14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.036 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.296 malloc1 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.296 [2024-12-09 14:43:55.209237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.296 [2024-12-09 14:43:55.209421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.296 [2024-12-09 14:43:55.209475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:17.296 [2024-12-09 14:43:55.209515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.296 [2024-12-09 14:43:55.212162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.296 [2024-12-09 14:43:55.212257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.296 pt1 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.296 malloc2 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.296 [2024-12-09 14:43:55.278858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.296 [2024-12-09 14:43:55.278964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.296 [2024-12-09 14:43:55.279006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.296 [2024-12-09 14:43:55.279020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.296 [2024-12-09 14:43:55.281976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.296 [2024-12-09 14:43:55.282124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.296 
pt2 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.296 malloc3 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.296 [2024-12-09 14:43:55.360801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.296 [2024-12-09 14:43:55.360968] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.296 [2024-12-09 14:43:55.361040] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:17.296 [2024-12-09 14:43:55.361089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.296 [2024-12-09 14:43:55.363924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.296 [2024-12-09 14:43:55.364020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.296 pt3 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.296 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.557 malloc4 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.557 [2024-12-09 14:43:55.431274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:17.557 [2024-12-09 14:43:55.431462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.557 [2024-12-09 14:43:55.431497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:17.557 [2024-12-09 14:43:55.431511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.557 [2024-12-09 14:43:55.434090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.557 [2024-12-09 14:43:55.434133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:17.557 pt4 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.557 [2024-12-09 14:43:55.443332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.557 [2024-12-09 
14:43:55.445638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.557 [2024-12-09 14:43:55.445849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.557 [2024-12-09 14:43:55.445918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:17.557 [2024-12-09 14:43:55.446150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:17.557 [2024-12-09 14:43:55.446164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.557 [2024-12-09 14:43:55.446505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:17.557 [2024-12-09 14:43:55.446755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:17.557 [2024-12-09 14:43:55.446772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:17.557 [2024-12-09 14:43:55.446999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.557 "name": "raid_bdev1", 00:11:17.557 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:17.557 "strip_size_kb": 64, 00:11:17.557 "state": "online", 00:11:17.557 "raid_level": "concat", 00:11:17.557 "superblock": true, 00:11:17.557 "num_base_bdevs": 4, 00:11:17.557 "num_base_bdevs_discovered": 4, 00:11:17.557 "num_base_bdevs_operational": 4, 00:11:17.557 "base_bdevs_list": [ 00:11:17.557 { 00:11:17.557 "name": "pt1", 00:11:17.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.557 "is_configured": true, 00:11:17.557 "data_offset": 2048, 00:11:17.557 "data_size": 63488 00:11:17.557 }, 00:11:17.557 { 00:11:17.557 "name": "pt2", 00:11:17.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.557 "is_configured": true, 00:11:17.557 "data_offset": 2048, 00:11:17.557 "data_size": 63488 00:11:17.557 }, 00:11:17.557 { 00:11:17.557 "name": "pt3", 00:11:17.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.557 "is_configured": true, 00:11:17.557 "data_offset": 2048, 00:11:17.557 
"data_size": 63488 00:11:17.557 }, 00:11:17.557 { 00:11:17.557 "name": "pt4", 00:11:17.557 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.557 "is_configured": true, 00:11:17.557 "data_offset": 2048, 00:11:17.557 "data_size": 63488 00:11:17.557 } 00:11:17.557 ] 00:11:17.557 }' 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.557 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.817 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.817 [2024-12-09 14:43:55.934949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.077 14:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.077 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.077 "name": "raid_bdev1", 00:11:18.077 "aliases": [ 00:11:18.077 "d454b12b-6552-4960-b464-2d32b76a3a4c" 
00:11:18.077 ], 00:11:18.077 "product_name": "Raid Volume", 00:11:18.077 "block_size": 512, 00:11:18.077 "num_blocks": 253952, 00:11:18.077 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:18.077 "assigned_rate_limits": { 00:11:18.077 "rw_ios_per_sec": 0, 00:11:18.077 "rw_mbytes_per_sec": 0, 00:11:18.077 "r_mbytes_per_sec": 0, 00:11:18.077 "w_mbytes_per_sec": 0 00:11:18.077 }, 00:11:18.077 "claimed": false, 00:11:18.077 "zoned": false, 00:11:18.077 "supported_io_types": { 00:11:18.077 "read": true, 00:11:18.077 "write": true, 00:11:18.077 "unmap": true, 00:11:18.077 "flush": true, 00:11:18.077 "reset": true, 00:11:18.077 "nvme_admin": false, 00:11:18.077 "nvme_io": false, 00:11:18.077 "nvme_io_md": false, 00:11:18.077 "write_zeroes": true, 00:11:18.077 "zcopy": false, 00:11:18.077 "get_zone_info": false, 00:11:18.077 "zone_management": false, 00:11:18.077 "zone_append": false, 00:11:18.077 "compare": false, 00:11:18.077 "compare_and_write": false, 00:11:18.077 "abort": false, 00:11:18.077 "seek_hole": false, 00:11:18.077 "seek_data": false, 00:11:18.077 "copy": false, 00:11:18.077 "nvme_iov_md": false 00:11:18.077 }, 00:11:18.077 "memory_domains": [ 00:11:18.077 { 00:11:18.077 "dma_device_id": "system", 00:11:18.077 "dma_device_type": 1 00:11:18.077 }, 00:11:18.077 { 00:11:18.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.077 "dma_device_type": 2 00:11:18.077 }, 00:11:18.077 { 00:11:18.077 "dma_device_id": "system", 00:11:18.077 "dma_device_type": 1 00:11:18.077 }, 00:11:18.077 { 00:11:18.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.077 "dma_device_type": 2 00:11:18.077 }, 00:11:18.077 { 00:11:18.077 "dma_device_id": "system", 00:11:18.077 "dma_device_type": 1 00:11:18.077 }, 00:11:18.077 { 00:11:18.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.077 "dma_device_type": 2 00:11:18.077 }, 00:11:18.077 { 00:11:18.077 "dma_device_id": "system", 00:11:18.077 "dma_device_type": 1 00:11:18.077 }, 00:11:18.077 { 00:11:18.077 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:18.077 "dma_device_type": 2 00:11:18.077 } 00:11:18.077 ], 00:11:18.077 "driver_specific": { 00:11:18.077 "raid": { 00:11:18.077 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:18.077 "strip_size_kb": 64, 00:11:18.077 "state": "online", 00:11:18.077 "raid_level": "concat", 00:11:18.077 "superblock": true, 00:11:18.077 "num_base_bdevs": 4, 00:11:18.077 "num_base_bdevs_discovered": 4, 00:11:18.077 "num_base_bdevs_operational": 4, 00:11:18.077 "base_bdevs_list": [ 00:11:18.077 { 00:11:18.077 "name": "pt1", 00:11:18.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.077 "is_configured": true, 00:11:18.077 "data_offset": 2048, 00:11:18.078 "data_size": 63488 00:11:18.078 }, 00:11:18.078 { 00:11:18.078 "name": "pt2", 00:11:18.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.078 "is_configured": true, 00:11:18.078 "data_offset": 2048, 00:11:18.078 "data_size": 63488 00:11:18.078 }, 00:11:18.078 { 00:11:18.078 "name": "pt3", 00:11:18.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.078 "is_configured": true, 00:11:18.078 "data_offset": 2048, 00:11:18.078 "data_size": 63488 00:11:18.078 }, 00:11:18.078 { 00:11:18.078 "name": "pt4", 00:11:18.078 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.078 "is_configured": true, 00:11:18.078 "data_offset": 2048, 00:11:18.078 "data_size": 63488 00:11:18.078 } 00:11:18.078 ] 00:11:18.078 } 00:11:18.078 } 00:11:18.078 }' 00:11:18.078 14:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:18.078 pt2 00:11:18.078 pt3 00:11:18.078 pt4' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.078 14:43:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.078 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.342 [2024-12-09 14:43:56.282222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d454b12b-6552-4960-b464-2d32b76a3a4c 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d454b12b-6552-4960-b464-2d32b76a3a4c ']' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.342 [2024-12-09 14:43:56.325832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.342 [2024-12-09 14:43:56.325892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.342 [2024-12-09 14:43:56.326037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.342 [2024-12-09 14:43:56.326130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.342 [2024-12-09 14:43:56.326150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.342 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.603 14:43:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.603 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.603 [2024-12-09 14:43:56.489634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:18.603 [2024-12-09 14:43:56.492123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:18.603 [2024-12-09 14:43:56.492251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:18.603 [2024-12-09 14:43:56.492319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:18.603 [2024-12-09 14:43:56.492431] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:18.603 [2024-12-09 14:43:56.492565] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:18.603 [2024-12-09 14:43:56.492661] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:18.603 [2024-12-09 14:43:56.492728] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:18.603 [2024-12-09 14:43:56.492788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.603 [2024-12-09 14:43:56.492835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:18.603 request: 00:11:18.603 { 00:11:18.603 "name": "raid_bdev1", 00:11:18.603 "raid_level": "concat", 00:11:18.603 "base_bdevs": [ 00:11:18.603 "malloc1", 00:11:18.604 "malloc2", 00:11:18.604 "malloc3", 00:11:18.604 "malloc4" 00:11:18.604 ], 00:11:18.604 "strip_size_kb": 64, 00:11:18.604 "superblock": false, 00:11:18.604 "method": "bdev_raid_create", 00:11:18.604 "req_id": 1 00:11:18.604 } 00:11:18.604 Got JSON-RPC error response 00:11:18.604 response: 00:11:18.604 { 00:11:18.604 "code": -17, 00:11:18.604 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:18.604 } 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.604 [2024-12-09 14:43:56.557449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.604 [2024-12-09 14:43:56.557614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.604 [2024-12-09 14:43:56.557667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:18.604 [2024-12-09 14:43:56.557739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.604 [2024-12-09 14:43:56.560473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.604 [2024-12-09 14:43:56.560611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.604 [2024-12-09 14:43:56.560754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.604 [2024-12-09 14:43:56.560874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.604 pt1 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.604 "name": "raid_bdev1", 00:11:18.604 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:18.604 "strip_size_kb": 64, 00:11:18.604 "state": "configuring", 00:11:18.604 "raid_level": "concat", 00:11:18.604 "superblock": true, 00:11:18.604 "num_base_bdevs": 4, 00:11:18.604 "num_base_bdevs_discovered": 1, 00:11:18.604 "num_base_bdevs_operational": 4, 00:11:18.604 "base_bdevs_list": [ 00:11:18.604 { 00:11:18.604 "name": "pt1", 00:11:18.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.604 "is_configured": true, 00:11:18.604 "data_offset": 2048, 00:11:18.604 "data_size": 63488 00:11:18.604 }, 00:11:18.604 { 00:11:18.604 "name": null, 00:11:18.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.604 "is_configured": false, 00:11:18.604 "data_offset": 2048, 00:11:18.604 "data_size": 63488 00:11:18.604 }, 00:11:18.604 { 00:11:18.604 "name": null, 00:11:18.604 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.604 "is_configured": false, 00:11:18.604 "data_offset": 2048, 00:11:18.604 "data_size": 63488 00:11:18.604 }, 00:11:18.604 { 00:11:18.604 "name": null, 00:11:18.604 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.604 "is_configured": false, 00:11:18.604 "data_offset": 2048, 00:11:18.604 "data_size": 63488 00:11:18.604 } 00:11:18.604 ] 00:11:18.604 }' 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.604 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.173 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:19.173 14:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:19.173 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.173 14:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.173 [2024-12-09 14:43:57.000819] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:19.173 [2024-12-09 14:43:57.001051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.173 [2024-12-09 14:43:57.001087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:19.173 [2024-12-09 14:43:57.001103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.173 [2024-12-09 14:43:57.001785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.173 [2024-12-09 14:43:57.001817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:19.173 [2024-12-09 14:43:57.001948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:19.173 [2024-12-09 14:43:57.001983] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:19.173 pt2 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.173 [2024-12-09 14:43:57.008767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.173 14:43:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.173 "name": "raid_bdev1", 00:11:19.173 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:19.173 "strip_size_kb": 64, 00:11:19.173 "state": "configuring", 00:11:19.173 "raid_level": "concat", 00:11:19.173 "superblock": true, 00:11:19.173 "num_base_bdevs": 4, 00:11:19.173 "num_base_bdevs_discovered": 1, 00:11:19.173 "num_base_bdevs_operational": 4, 00:11:19.173 "base_bdevs_list": [ 00:11:19.173 { 00:11:19.173 "name": "pt1", 00:11:19.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.173 "is_configured": true, 00:11:19.173 "data_offset": 2048, 00:11:19.173 "data_size": 63488 00:11:19.173 }, 00:11:19.173 { 00:11:19.173 "name": null, 00:11:19.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.173 "is_configured": false, 00:11:19.173 "data_offset": 0, 00:11:19.173 "data_size": 63488 00:11:19.173 }, 00:11:19.173 { 00:11:19.173 "name": null, 00:11:19.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.173 "is_configured": false, 00:11:19.173 "data_offset": 2048, 00:11:19.173 "data_size": 63488 00:11:19.173 }, 00:11:19.173 { 00:11:19.173 "name": null, 00:11:19.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.173 "is_configured": false, 00:11:19.173 "data_offset": 2048, 00:11:19.173 "data_size": 63488 00:11:19.173 } 00:11:19.173 ] 00:11:19.173 }' 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.173 14:43:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.433 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:19.433 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.433 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:19.433 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.433 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 [2024-12-09 14:43:57.444048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:19.433 [2024-12-09 14:43:57.444289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.433 [2024-12-09 14:43:57.444342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:19.433 [2024-12-09 14:43:57.444385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.434 [2024-12-09 14:43:57.445025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.434 [2024-12-09 14:43:57.445100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:19.434 [2024-12-09 14:43:57.445259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:19.434 [2024-12-09 14:43:57.445298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:19.434 pt2 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.434 [2024-12-09 14:43:57.455970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:19.434 [2024-12-09 14:43:57.456057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.434 [2024-12-09 14:43:57.456087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:19.434 [2024-12-09 14:43:57.456100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.434 [2024-12-09 14:43:57.456702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.434 [2024-12-09 14:43:57.456751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:19.434 [2024-12-09 14:43:57.456876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:19.434 [2024-12-09 14:43:57.456931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:19.434 pt3 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.434 [2024-12-09 14:43:57.467912] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:19.434 [2024-12-09 14:43:57.467991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.434 [2024-12-09 14:43:57.468020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:19.434 [2024-12-09 14:43:57.468031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.434 [2024-12-09 14:43:57.468646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.434 [2024-12-09 14:43:57.468677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:19.434 [2024-12-09 14:43:57.468800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:19.434 [2024-12-09 14:43:57.468857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:19.434 [2024-12-09 14:43:57.469051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.434 [2024-12-09 14:43:57.469062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:19.434 [2024-12-09 14:43:57.469373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:19.434 [2024-12-09 14:43:57.469610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.434 [2024-12-09 14:43:57.469628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:19.434 [2024-12-09 14:43:57.469805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.434 pt4 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.434 "name": "raid_bdev1", 00:11:19.434 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:19.434 "strip_size_kb": 64, 00:11:19.434 "state": "online", 00:11:19.434 "raid_level": "concat", 00:11:19.434 
"superblock": true, 00:11:19.434 "num_base_bdevs": 4, 00:11:19.434 "num_base_bdevs_discovered": 4, 00:11:19.434 "num_base_bdevs_operational": 4, 00:11:19.434 "base_bdevs_list": [ 00:11:19.434 { 00:11:19.434 "name": "pt1", 00:11:19.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.434 "is_configured": true, 00:11:19.434 "data_offset": 2048, 00:11:19.434 "data_size": 63488 00:11:19.434 }, 00:11:19.434 { 00:11:19.434 "name": "pt2", 00:11:19.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.434 "is_configured": true, 00:11:19.434 "data_offset": 2048, 00:11:19.434 "data_size": 63488 00:11:19.434 }, 00:11:19.434 { 00:11:19.434 "name": "pt3", 00:11:19.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.434 "is_configured": true, 00:11:19.434 "data_offset": 2048, 00:11:19.434 "data_size": 63488 00:11:19.434 }, 00:11:19.434 { 00:11:19.434 "name": "pt4", 00:11:19.434 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.434 "is_configured": true, 00:11:19.434 "data_offset": 2048, 00:11:19.434 "data_size": 63488 00:11:19.434 } 00:11:19.434 ] 00:11:19.434 }' 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.434 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.003 14:43:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.003 [2024-12-09 14:43:57.955565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.003 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.003 "name": "raid_bdev1", 00:11:20.003 "aliases": [ 00:11:20.003 "d454b12b-6552-4960-b464-2d32b76a3a4c" 00:11:20.003 ], 00:11:20.003 "product_name": "Raid Volume", 00:11:20.003 "block_size": 512, 00:11:20.003 "num_blocks": 253952, 00:11:20.003 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:20.003 "assigned_rate_limits": { 00:11:20.003 "rw_ios_per_sec": 0, 00:11:20.003 "rw_mbytes_per_sec": 0, 00:11:20.003 "r_mbytes_per_sec": 0, 00:11:20.003 "w_mbytes_per_sec": 0 00:11:20.003 }, 00:11:20.003 "claimed": false, 00:11:20.003 "zoned": false, 00:11:20.003 "supported_io_types": { 00:11:20.003 "read": true, 00:11:20.003 "write": true, 00:11:20.003 "unmap": true, 00:11:20.003 "flush": true, 00:11:20.003 "reset": true, 00:11:20.003 "nvme_admin": false, 00:11:20.003 "nvme_io": false, 00:11:20.003 "nvme_io_md": false, 00:11:20.003 "write_zeroes": true, 00:11:20.003 "zcopy": false, 00:11:20.003 "get_zone_info": false, 00:11:20.003 "zone_management": false, 00:11:20.003 "zone_append": false, 00:11:20.003 "compare": false, 00:11:20.003 "compare_and_write": false, 00:11:20.003 "abort": false, 00:11:20.003 "seek_hole": false, 00:11:20.003 "seek_data": false, 00:11:20.003 "copy": false, 00:11:20.003 "nvme_iov_md": false 00:11:20.003 }, 00:11:20.003 
"memory_domains": [ 00:11:20.003 { 00:11:20.004 "dma_device_id": "system", 00:11:20.004 "dma_device_type": 1 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.004 "dma_device_type": 2 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "dma_device_id": "system", 00:11:20.004 "dma_device_type": 1 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.004 "dma_device_type": 2 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "dma_device_id": "system", 00:11:20.004 "dma_device_type": 1 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.004 "dma_device_type": 2 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "dma_device_id": "system", 00:11:20.004 "dma_device_type": 1 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.004 "dma_device_type": 2 00:11:20.004 } 00:11:20.004 ], 00:11:20.004 "driver_specific": { 00:11:20.004 "raid": { 00:11:20.004 "uuid": "d454b12b-6552-4960-b464-2d32b76a3a4c", 00:11:20.004 "strip_size_kb": 64, 00:11:20.004 "state": "online", 00:11:20.004 "raid_level": "concat", 00:11:20.004 "superblock": true, 00:11:20.004 "num_base_bdevs": 4, 00:11:20.004 "num_base_bdevs_discovered": 4, 00:11:20.004 "num_base_bdevs_operational": 4, 00:11:20.004 "base_bdevs_list": [ 00:11:20.004 { 00:11:20.004 "name": "pt1", 00:11:20.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.004 "is_configured": true, 00:11:20.004 "data_offset": 2048, 00:11:20.004 "data_size": 63488 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "name": "pt2", 00:11:20.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.004 "is_configured": true, 00:11:20.004 "data_offset": 2048, 00:11:20.004 "data_size": 63488 00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "name": "pt3", 00:11:20.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.004 "is_configured": true, 00:11:20.004 "data_offset": 2048, 00:11:20.004 "data_size": 63488 
00:11:20.004 }, 00:11:20.004 { 00:11:20.004 "name": "pt4", 00:11:20.004 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.004 "is_configured": true, 00:11:20.004 "data_offset": 2048, 00:11:20.004 "data_size": 63488 00:11:20.004 } 00:11:20.004 ] 00:11:20.004 } 00:11:20.004 } 00:11:20.004 }' 00:11:20.004 14:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:20.004 pt2 00:11:20.004 pt3 00:11:20.004 pt4' 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.004 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.264 
14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:20.264 [2024-12-09 14:43:58.235123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d454b12b-6552-4960-b464-2d32b76a3a4c '!=' d454b12b-6552-4960-b464-2d32b76a3a4c ']' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73924 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73924 ']' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73924 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73924 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73924' 00:11:20.264 killing process with pid 73924 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73924 00:11:20.264 [2024-12-09 14:43:58.322120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.264 14:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73924 00:11:20.264 [2024-12-09 14:43:58.322343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.264 [2024-12-09 14:43:58.322448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.264 [2024-12-09 14:43:58.322530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:20.834 [2024-12-09 14:43:58.809301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.216 14:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:22.217 00:11:22.217 real 0m6.027s 00:11:22.217 user 0m8.354s 00:11:22.217 sys 0m1.088s 00:11:22.217 14:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.217 14:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.217 ************************************ 00:11:22.217 END TEST raid_superblock_test 
00:11:22.217 ************************************ 00:11:22.217 14:44:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:22.217 14:44:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:22.217 14:44:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.217 14:44:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.217 ************************************ 00:11:22.217 START TEST raid_read_error_test 00:11:22.217 ************************************ 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3GkkGMV3WG 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74194 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74194 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74194 ']' 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.217 14:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.477 [2024-12-09 14:44:00.398879] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:11:22.477 [2024-12-09 14:44:00.399142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74194 ] 00:11:22.477 [2024-12-09 14:44:00.577417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.737 [2024-12-09 14:44:00.696284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.997 [2024-12-09 14:44:00.906189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.997 [2024-12-09 14:44:00.906256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.257 BaseBdev1_malloc 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.257 true 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.257 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.257 [2024-12-09 14:44:01.374799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.257 [2024-12-09 14:44:01.374862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.257 [2024-12-09 14:44:01.374902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:23.257 [2024-12-09 14:44:01.374915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.257 [2024-12-09 14:44:01.377316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.257 [2024-12-09 14:44:01.377361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.517 BaseBdev1 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 BaseBdev2_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 true 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 [2024-12-09 14:44:01.442642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:23.517 [2024-12-09 14:44:01.442711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.517 [2024-12-09 14:44:01.442729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:23.517 [2024-12-09 14:44:01.442742] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.517 [2024-12-09 14:44:01.445202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.517 [2024-12-09 14:44:01.445260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.517 BaseBdev2 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 BaseBdev3_malloc 00:11:23.517 14:44:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 true 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 [2024-12-09 14:44:01.526120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:23.517 [2024-12-09 14:44:01.526177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.517 [2024-12-09 14:44:01.526210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:23.517 [2024-12-09 14:44:01.526221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.517 [2024-12-09 14:44:01.528513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.517 [2024-12-09 14:44:01.528553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:23.517 BaseBdev3 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 BaseBdev4_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 true 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 [2024-12-09 14:44:01.594203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:23.517 [2024-12-09 14:44:01.594263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.517 [2024-12-09 14:44:01.594281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:23.517 [2024-12-09 14:44:01.594292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.517 [2024-12-09 14:44:01.596668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.517 [2024-12-09 14:44:01.596707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:23.517 BaseBdev4 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.517 [2024-12-09 14:44:01.606250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.517 [2024-12-09 14:44:01.608275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.517 [2024-12-09 14:44:01.608465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.517 [2024-12-09 14:44:01.608563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.517 [2024-12-09 14:44:01.608947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:23.517 [2024-12-09 14:44:01.609015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:23.517 [2024-12-09 14:44:01.609308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:23.517 [2024-12-09 14:44:01.609500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:23.517 [2024-12-09 14:44:01.609512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:23.517 [2024-12-09 14:44:01.609713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.517 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:23.518 14:44:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.518 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.776 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.776 "name": "raid_bdev1", 00:11:23.776 "uuid": "a03a1f1a-fc23-4790-b595-8af5465d606c", 00:11:23.777 "strip_size_kb": 64, 00:11:23.777 "state": "online", 00:11:23.777 "raid_level": "concat", 00:11:23.777 "superblock": true, 00:11:23.777 "num_base_bdevs": 4, 00:11:23.777 "num_base_bdevs_discovered": 4, 00:11:23.777 "num_base_bdevs_operational": 4, 00:11:23.777 "base_bdevs_list": [ 
00:11:23.777 { 00:11:23.777 "name": "BaseBdev1", 00:11:23.777 "uuid": "fd128426-632c-5b07-9955-1570639db5ab", 00:11:23.777 "is_configured": true, 00:11:23.777 "data_offset": 2048, 00:11:23.777 "data_size": 63488 00:11:23.777 }, 00:11:23.777 { 00:11:23.777 "name": "BaseBdev2", 00:11:23.777 "uuid": "5f9eeb81-c2f2-5575-817d-2c17bfbb730d", 00:11:23.777 "is_configured": true, 00:11:23.777 "data_offset": 2048, 00:11:23.777 "data_size": 63488 00:11:23.777 }, 00:11:23.777 { 00:11:23.777 "name": "BaseBdev3", 00:11:23.777 "uuid": "e47a9523-81d5-5239-9e39-48a3c5ef679a", 00:11:23.777 "is_configured": true, 00:11:23.777 "data_offset": 2048, 00:11:23.777 "data_size": 63488 00:11:23.777 }, 00:11:23.777 { 00:11:23.777 "name": "BaseBdev4", 00:11:23.777 "uuid": "2ad77ceb-d418-5d65-8f77-5f19b3902f88", 00:11:23.777 "is_configured": true, 00:11:23.777 "data_offset": 2048, 00:11:23.777 "data_size": 63488 00:11:23.777 } 00:11:23.777 ] 00:11:23.777 }' 00:11:23.777 14:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.777 14:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 14:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.036 14:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.036 [2024-12-09 14:44:02.130767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.973 14:44:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:24.973 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.974 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.233 14:44:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.233 "name": "raid_bdev1", 00:11:25.233 "uuid": "a03a1f1a-fc23-4790-b595-8af5465d606c", 00:11:25.233 "strip_size_kb": 64, 00:11:25.233 "state": "online", 00:11:25.233 "raid_level": "concat", 00:11:25.233 "superblock": true, 00:11:25.233 "num_base_bdevs": 4, 00:11:25.233 "num_base_bdevs_discovered": 4, 00:11:25.233 "num_base_bdevs_operational": 4, 00:11:25.233 "base_bdevs_list": [ 00:11:25.233 { 00:11:25.233 "name": "BaseBdev1", 00:11:25.233 "uuid": "fd128426-632c-5b07-9955-1570639db5ab", 00:11:25.233 "is_configured": true, 00:11:25.233 "data_offset": 2048, 00:11:25.233 "data_size": 63488 00:11:25.233 }, 00:11:25.233 { 00:11:25.233 "name": "BaseBdev2", 00:11:25.233 "uuid": "5f9eeb81-c2f2-5575-817d-2c17bfbb730d", 00:11:25.233 "is_configured": true, 00:11:25.233 "data_offset": 2048, 00:11:25.233 "data_size": 63488 00:11:25.233 }, 00:11:25.233 { 00:11:25.233 "name": "BaseBdev3", 00:11:25.233 "uuid": "e47a9523-81d5-5239-9e39-48a3c5ef679a", 00:11:25.233 "is_configured": true, 00:11:25.233 "data_offset": 2048, 00:11:25.233 "data_size": 63488 00:11:25.233 }, 00:11:25.233 { 00:11:25.233 "name": "BaseBdev4", 00:11:25.233 "uuid": "2ad77ceb-d418-5d65-8f77-5f19b3902f88", 00:11:25.233 "is_configured": true, 00:11:25.233 "data_offset": 2048, 00:11:25.233 "data_size": 63488 00:11:25.233 } 00:11:25.233 ] 00:11:25.233 }' 00:11:25.233 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.233 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.493 [2024-12-09 14:44:03.475230] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.493 [2024-12-09 14:44:03.475342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.493 [2024-12-09 14:44:03.478631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.493 [2024-12-09 14:44:03.478747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.493 [2024-12-09 14:44:03.478818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.493 [2024-12-09 14:44:03.478882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:25.493 { 00:11:25.493 "results": [ 00:11:25.493 { 00:11:25.493 "job": "raid_bdev1", 00:11:25.493 "core_mask": "0x1", 00:11:25.493 "workload": "randrw", 00:11:25.493 "percentage": 50, 00:11:25.493 "status": "finished", 00:11:25.493 "queue_depth": 1, 00:11:25.493 "io_size": 131072, 00:11:25.493 "runtime": 1.345326, 00:11:25.493 "iops": 14075.398825266144, 00:11:25.493 "mibps": 1759.424853158268, 00:11:25.493 "io_failed": 1, 00:11:25.493 "io_timeout": 0, 00:11:25.493 "avg_latency_us": 98.43971154180963, 00:11:25.493 "min_latency_us": 27.388646288209607, 00:11:25.493 "max_latency_us": 1581.1633187772925 00:11:25.493 } 00:11:25.493 ], 00:11:25.493 "core_count": 1 00:11:25.493 } 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74194 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74194 ']' 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74194 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74194 00:11:25.493 killing process with pid 74194 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74194' 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74194 00:11:25.493 [2024-12-09 14:44:03.535050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.493 14:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74194 00:11:25.752 [2024-12-09 14:44:03.871923] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3GkkGMV3WG 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:27.147 ************************************ 00:11:27.147 END TEST raid_read_error_test 00:11:27.147 ************************************ 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:27.147 00:11:27.147 real 0m4.862s 
00:11:27.147 user 0m5.756s 00:11:27.147 sys 0m0.584s 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.147 14:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.147 14:44:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:27.147 14:44:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:27.147 14:44:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.147 14:44:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.147 ************************************ 00:11:27.147 START TEST raid_write_error_test 00:11:27.147 ************************************ 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6m9U5CqN5P 00:11:27.147 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74340 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74340 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74340 ']' 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:27.147 14:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.407 [2024-12-09 14:44:05.318547] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
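The `waitforlisten 74340` step traced above blocks until the bdevperf app exposes its RPC socket at `/var/tmp/spdk.sock`. A minimal sketch of that poll loop — the helper name is made up and the suite's real `waitforlisten` does more (e.g. it also tracks the pid); this only polls for the socket:

```shell
# Hypothetical stand-in for waitforlisten: poll until the app's
# UNIX-domain RPC socket appears, giving up after a bounded number
# of tries. Socket path matches the log above; the name is invented.
wait_for_rpc_sock() {
  local sock=${1:-/var/tmp/spdk.sock}
  local retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # socket exists: app is listening
    sleep 0.1
  done
  return 1                        # timed out waiting for the app
}
```

Only once this returns does the test start issuing `rpc_cmd` calls such as the `bdev_malloc_create` sequence that follows.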
00:11:27.407 [2024-12-09 14:44:05.318690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74340 ] 00:11:27.407 [2024-12-09 14:44:05.495183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.665 [2024-12-09 14:44:05.617305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.924 [2024-12-09 14:44:05.831152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.924 [2024-12-09 14:44:05.831224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.183 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.184 BaseBdev1_malloc 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.184 true 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.184 [2024-12-09 14:44:06.266965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:28.184 [2024-12-09 14:44:06.267026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.184 [2024-12-09 14:44:06.267047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:28.184 [2024-12-09 14:44:06.267059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.184 [2024-12-09 14:44:06.269224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.184 [2024-12-09 14:44:06.269268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:28.184 BaseBdev1 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.184 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 BaseBdev2_malloc 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:28.444 14:44:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 true 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 [2024-12-09 14:44:06.324149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:28.444 [2024-12-09 14:44:06.324207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.444 [2024-12-09 14:44:06.324226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:28.444 [2024-12-09 14:44:06.324237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.444 [2024-12-09 14:44:06.326456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.444 [2024-12-09 14:44:06.326538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:28.444 BaseBdev2 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
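Each base bdev in this test is a three-layer stack: a malloc bdev, wrapped by an error-injection bdev, wrapped by a passthru bdev named `BaseBdevN`. A sketch of that RPC sequence with a stubbed `rpc_cmd` so it runs standalone (the real helper dispatches to the app over the RPC socket):

```shell
# Stub for the suite's rpc_cmd helper; the real one forwards the
# arguments to the running SPDK application's RPC server.
rpc_cmd() { echo "rpc: $*"; }

# Build one error-injectable base bdev, mirroring the
# malloc -> error -> passthru stacking traced in the log above.
make_error_base_bdev() {
  local name=$1                                        # e.g. BaseBdev1
  rpc_cmd bdev_malloc_create 32 512 -b "${name}_malloc"
  rpc_cmd bdev_error_create "${name}_malloc"           # exposes EE_<name>_malloc
  rpc_cmd bdev_passthru_create -b "EE_${name}_malloc" -p "$name"
}
```

The later `bdev_error_inject_error EE_BaseBdev1_malloc write failure` call in this log then targets the middle (error) layer of exactly such a stack.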
00:11:28.444 BaseBdev3_malloc 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 true 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 [2024-12-09 14:44:06.397863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:28.444 [2024-12-09 14:44:06.397994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.444 [2024-12-09 14:44:06.398039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:28.444 [2024-12-09 14:44:06.398054] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.444 [2024-12-09 14:44:06.400633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.444 [2024-12-09 14:44:06.400676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:28.444 BaseBdev3 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 BaseBdev4_malloc 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 true 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 [2024-12-09 14:44:06.458730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:28.444 [2024-12-09 14:44:06.458786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.444 [2024-12-09 14:44:06.458806] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:28.444 [2024-12-09 14:44:06.458818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.444 [2024-12-09 14:44:06.461204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.444 [2024-12-09 14:44:06.461252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:28.444 BaseBdev4 
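`verify_raid_bdev_state`, invoked once the raid is assembled, selects the raid's entry from `bdev_raid_get_bdevs all` with jq and compares fields such as `state` against expectations. A dependency-free sketch of that field check on a trimmed copy of the JSON shape shown in this log — the `json_field` helper is an illustration only (the suite itself uses jq) and assumes the key occurs once with an unescaped string value:

```shell
# Trimmed copy of the JSON shape bdev_raid_get_bdevs returns (see log).
raid_bdev_info='{"name":"raid_bdev1","state":"online","raid_level":"concat"}'

# Pull one top-level string field without jq -- illustration only.
json_field() {
  local rest=${1#*\"$2\":\"}   # drop everything through "<key>":"
  printf '%s\n' "${rest%%\"*}" # keep up to the closing quote
}

state=$(json_field "$raid_bdev_info" state)
```

The trace above performs the equivalent with `jq -r '.[] | select(.name == "raid_bdev1")'` and then inspects `state`, `raid_level`, and the `num_base_bdevs*` counters.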
00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.444 [2024-12-09 14:44:06.466808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.444 [2024-12-09 14:44:06.468831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.444 [2024-12-09 14:44:06.468950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.444 [2024-12-09 14:44:06.469058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:28.444 [2024-12-09 14:44:06.469349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:28.444 [2024-12-09 14:44:06.469408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:28.444 [2024-12-09 14:44:06.469703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:28.444 [2024-12-09 14:44:06.469925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:28.444 [2024-12-09 14:44:06.469972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:28.444 [2024-12-09 14:44:06.470184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.444 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.445 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.445 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.445 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.445 "name": "raid_bdev1", 00:11:28.445 "uuid": "b3a587bf-d8cc-42cd-a1b9-3474c2883574", 00:11:28.445 "strip_size_kb": 64, 00:11:28.445 "state": "online", 00:11:28.445 "raid_level": "concat", 00:11:28.445 "superblock": true, 00:11:28.445 "num_base_bdevs": 4, 00:11:28.445 "num_base_bdevs_discovered": 4, 00:11:28.445 
"num_base_bdevs_operational": 4, 00:11:28.445 "base_bdevs_list": [ 00:11:28.445 { 00:11:28.445 "name": "BaseBdev1", 00:11:28.445 "uuid": "eb768889-8732-55de-8199-3909fc5db6f0", 00:11:28.445 "is_configured": true, 00:11:28.445 "data_offset": 2048, 00:11:28.445 "data_size": 63488 00:11:28.445 }, 00:11:28.445 { 00:11:28.445 "name": "BaseBdev2", 00:11:28.445 "uuid": "24dfec26-5fc6-5ea2-943f-1e9bcd651bce", 00:11:28.445 "is_configured": true, 00:11:28.445 "data_offset": 2048, 00:11:28.445 "data_size": 63488 00:11:28.445 }, 00:11:28.445 { 00:11:28.445 "name": "BaseBdev3", 00:11:28.445 "uuid": "4c756376-5c41-5bdc-88f8-88ce5a3a950d", 00:11:28.445 "is_configured": true, 00:11:28.445 "data_offset": 2048, 00:11:28.445 "data_size": 63488 00:11:28.445 }, 00:11:28.445 { 00:11:28.445 "name": "BaseBdev4", 00:11:28.445 "uuid": "9e83a578-03a2-5ba5-97d8-65e83c417ff9", 00:11:28.445 "is_configured": true, 00:11:28.445 "data_offset": 2048, 00:11:28.445 "data_size": 63488 00:11:28.445 } 00:11:28.445 ] 00:11:28.445 }' 00:11:28.445 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.445 14:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.013 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:29.014 14:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:29.014 [2024-12-09 14:44:07.007357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.952 14:44:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.952 "name": "raid_bdev1", 00:11:29.952 "uuid": "b3a587bf-d8cc-42cd-a1b9-3474c2883574", 00:11:29.952 "strip_size_kb": 64, 00:11:29.952 "state": "online", 00:11:29.952 "raid_level": "concat", 00:11:29.952 "superblock": true, 00:11:29.952 "num_base_bdevs": 4, 00:11:29.952 "num_base_bdevs_discovered": 4, 00:11:29.952 "num_base_bdevs_operational": 4, 00:11:29.952 "base_bdevs_list": [ 00:11:29.952 { 00:11:29.952 "name": "BaseBdev1", 00:11:29.952 "uuid": "eb768889-8732-55de-8199-3909fc5db6f0", 00:11:29.952 "is_configured": true, 00:11:29.952 "data_offset": 2048, 00:11:29.952 "data_size": 63488 00:11:29.952 }, 00:11:29.952 { 00:11:29.952 "name": "BaseBdev2", 00:11:29.952 "uuid": "24dfec26-5fc6-5ea2-943f-1e9bcd651bce", 00:11:29.952 "is_configured": true, 00:11:29.952 "data_offset": 2048, 00:11:29.952 "data_size": 63488 00:11:29.952 }, 00:11:29.952 { 00:11:29.952 "name": "BaseBdev3", 00:11:29.952 "uuid": "4c756376-5c41-5bdc-88f8-88ce5a3a950d", 00:11:29.952 "is_configured": true, 00:11:29.952 "data_offset": 2048, 00:11:29.952 "data_size": 63488 00:11:29.952 }, 00:11:29.952 { 00:11:29.952 "name": "BaseBdev4", 00:11:29.952 "uuid": "9e83a578-03a2-5ba5-97d8-65e83c417ff9", 00:11:29.952 "is_configured": true, 00:11:29.952 "data_offset": 2048, 00:11:29.952 "data_size": 63488 00:11:29.952 } 00:11:29.952 ] 00:11:29.952 }' 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.952 14:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.520 [2024-12-09 14:44:08.416441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:30.520 [2024-12-09 14:44:08.416589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.520 [2024-12-09 14:44:08.419744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.520 [2024-12-09 14:44:08.419871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.520 [2024-12-09 14:44:08.419947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.520 [2024-12-09 14:44:08.420015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:30.520 { 00:11:30.520 "results": [ 00:11:30.520 { 00:11:30.520 "job": "raid_bdev1", 00:11:30.520 "core_mask": "0x1", 00:11:30.520 "workload": "randrw", 00:11:30.520 "percentage": 50, 00:11:30.520 "status": "finished", 00:11:30.520 "queue_depth": 1, 00:11:30.520 "io_size": 131072, 00:11:30.520 "runtime": 1.409787, 00:11:30.520 "iops": 14243.995724176773, 00:11:30.520 "mibps": 1780.4994655220967, 00:11:30.520 "io_failed": 1, 00:11:30.520 "io_timeout": 0, 00:11:30.520 "avg_latency_us": 97.39268997111841, 00:11:30.520 "min_latency_us": 27.50043668122271, 00:11:30.520 "max_latency_us": 1502.46288209607 00:11:30.520 } 00:11:30.520 ], 00:11:30.520 "core_count": 1 00:11:30.520 } 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74340 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74340 ']' 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74340 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74340 00:11:30.520 killing process with pid 74340 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74340' 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74340 00:11:30.520 [2024-12-09 14:44:08.456039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.520 14:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74340 00:11:30.778 [2024-12-09 14:44:08.808356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6m9U5CqN5P 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:32.158 ************************************ 00:11:32.158 END TEST raid_write_error_test 00:11:32.158 ************************************ 00:11:32.158 14:44:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:32.158 00:11:32.158 real 0m4.837s 00:11:32.158 user 0m5.758s 00:11:32.158 sys 0m0.579s 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.158 14:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.158 14:44:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:32.158 14:44:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:32.158 14:44:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.158 14:44:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.158 14:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.158 ************************************ 00:11:32.158 START TEST raid_state_function_test 00:11:32.158 ************************************ 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.158 14:44:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:32.158 Process raid pid: 74489 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74489 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74489' 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74489 00:11:32.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74489 ']' 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.158 14:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.158 [2024-12-09 14:44:10.229647] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:11:32.158 [2024-12-09 14:44:10.229929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.417 [2024-12-09 14:44:10.403235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.417 [2024-12-09 14:44:10.523222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.679 [2024-12-09 14:44:10.743134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.679 [2024-12-09 14:44:10.743231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.248 [2024-12-09 14:44:11.115425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.248 [2024-12-09 14:44:11.115552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.248 [2024-12-09 14:44:11.115616] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.248 [2024-12-09 14:44:11.115646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.248 [2024-12-09 14:44:11.115685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:33.248 [2024-12-09 14:44:11.115712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.248 [2024-12-09 14:44:11.115768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.248 [2024-12-09 14:44:11.115793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.248 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.249 "name": "Existed_Raid", 00:11:33.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.249 "strip_size_kb": 0, 00:11:33.249 "state": "configuring", 00:11:33.249 "raid_level": "raid1", 00:11:33.249 "superblock": false, 00:11:33.249 "num_base_bdevs": 4, 00:11:33.249 "num_base_bdevs_discovered": 0, 00:11:33.249 "num_base_bdevs_operational": 4, 00:11:33.249 "base_bdevs_list": [ 00:11:33.249 { 00:11:33.249 "name": "BaseBdev1", 00:11:33.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.249 "is_configured": false, 00:11:33.249 "data_offset": 0, 00:11:33.249 "data_size": 0 00:11:33.249 }, 00:11:33.249 { 00:11:33.249 "name": "BaseBdev2", 00:11:33.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.249 "is_configured": false, 00:11:33.249 "data_offset": 0, 00:11:33.249 "data_size": 0 00:11:33.249 }, 00:11:33.249 { 00:11:33.249 "name": "BaseBdev3", 00:11:33.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.249 "is_configured": false, 00:11:33.249 "data_offset": 0, 00:11:33.249 "data_size": 0 00:11:33.249 }, 00:11:33.249 { 00:11:33.249 "name": "BaseBdev4", 00:11:33.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.249 "is_configured": false, 00:11:33.249 "data_offset": 0, 00:11:33.249 "data_size": 0 00:11:33.249 } 00:11:33.249 ] 00:11:33.249 }' 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.249 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.508 [2024-12-09 14:44:11.598634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.508 [2024-12-09 14:44:11.598735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.508 [2024-12-09 14:44:11.610569] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.508 [2024-12-09 14:44:11.610679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.508 [2024-12-09 14:44:11.610715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.508 [2024-12-09 14:44:11.610742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.508 [2024-12-09 14:44:11.610774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.508 [2024-12-09 14:44:11.610788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.508 [2024-12-09 14:44:11.610796] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.508 [2024-12-09 14:44:11.610806] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.508 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.768 [2024-12-09 14:44:11.661066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.768 BaseBdev1 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.768 [ 00:11:33.768 { 00:11:33.768 "name": "BaseBdev1", 00:11:33.768 "aliases": [ 00:11:33.768 "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa" 00:11:33.768 ], 00:11:33.768 "product_name": "Malloc disk", 00:11:33.768 "block_size": 512, 00:11:33.768 "num_blocks": 65536, 00:11:33.768 "uuid": "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa", 00:11:33.768 "assigned_rate_limits": { 00:11:33.768 "rw_ios_per_sec": 0, 00:11:33.768 "rw_mbytes_per_sec": 0, 00:11:33.768 "r_mbytes_per_sec": 0, 00:11:33.768 "w_mbytes_per_sec": 0 00:11:33.768 }, 00:11:33.768 "claimed": true, 00:11:33.768 "claim_type": "exclusive_write", 00:11:33.768 "zoned": false, 00:11:33.768 "supported_io_types": { 00:11:33.768 "read": true, 00:11:33.768 "write": true, 00:11:33.768 "unmap": true, 00:11:33.768 "flush": true, 00:11:33.768 "reset": true, 00:11:33.768 "nvme_admin": false, 00:11:33.768 "nvme_io": false, 00:11:33.768 "nvme_io_md": false, 00:11:33.768 "write_zeroes": true, 00:11:33.768 "zcopy": true, 00:11:33.768 "get_zone_info": false, 00:11:33.768 "zone_management": false, 00:11:33.768 "zone_append": false, 00:11:33.768 "compare": false, 00:11:33.768 "compare_and_write": false, 00:11:33.768 "abort": true, 00:11:33.768 "seek_hole": false, 00:11:33.768 "seek_data": false, 00:11:33.768 "copy": true, 00:11:33.768 "nvme_iov_md": false 00:11:33.768 }, 00:11:33.768 "memory_domains": [ 00:11:33.768 { 00:11:33.768 "dma_device_id": "system", 00:11:33.768 "dma_device_type": 1 00:11:33.768 }, 00:11:33.768 { 00:11:33.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.768 "dma_device_type": 2 00:11:33.768 } 00:11:33.768 ], 00:11:33.768 "driver_specific": {} 00:11:33.768 } 00:11:33.768 ] 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.768 "name": "Existed_Raid", 
00:11:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.768 "strip_size_kb": 0, 00:11:33.768 "state": "configuring", 00:11:33.768 "raid_level": "raid1", 00:11:33.768 "superblock": false, 00:11:33.768 "num_base_bdevs": 4, 00:11:33.768 "num_base_bdevs_discovered": 1, 00:11:33.768 "num_base_bdevs_operational": 4, 00:11:33.768 "base_bdevs_list": [ 00:11:33.768 { 00:11:33.768 "name": "BaseBdev1", 00:11:33.768 "uuid": "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa", 00:11:33.768 "is_configured": true, 00:11:33.768 "data_offset": 0, 00:11:33.768 "data_size": 65536 00:11:33.768 }, 00:11:33.768 { 00:11:33.768 "name": "BaseBdev2", 00:11:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.768 "is_configured": false, 00:11:33.768 "data_offset": 0, 00:11:33.768 "data_size": 0 00:11:33.768 }, 00:11:33.768 { 00:11:33.768 "name": "BaseBdev3", 00:11:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.768 "is_configured": false, 00:11:33.768 "data_offset": 0, 00:11:33.768 "data_size": 0 00:11:33.768 }, 00:11:33.768 { 00:11:33.768 "name": "BaseBdev4", 00:11:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.768 "is_configured": false, 00:11:33.768 "data_offset": 0, 00:11:33.768 "data_size": 0 00:11:33.768 } 00:11:33.768 ] 00:11:33.768 }' 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.768 14:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.028 [2024-12-09 14:44:12.120343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.028 [2024-12-09 14:44:12.120469] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.028 [2024-12-09 14:44:12.128363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.028 [2024-12-09 14:44:12.130363] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.028 [2024-12-09 14:44:12.130407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.028 [2024-12-09 14:44:12.130417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.028 [2024-12-09 14:44:12.130427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.028 [2024-12-09 14:44:12.130434] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.028 [2024-12-09 14:44:12.130442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.028 
14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.028 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.288 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.288 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.288 "name": "Existed_Raid", 00:11:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.288 "strip_size_kb": 0, 00:11:34.288 "state": "configuring", 00:11:34.288 "raid_level": "raid1", 00:11:34.288 "superblock": false, 00:11:34.288 "num_base_bdevs": 4, 00:11:34.288 "num_base_bdevs_discovered": 1, 
00:11:34.288 "num_base_bdevs_operational": 4, 00:11:34.288 "base_bdevs_list": [ 00:11:34.288 { 00:11:34.288 "name": "BaseBdev1", 00:11:34.288 "uuid": "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa", 00:11:34.288 "is_configured": true, 00:11:34.288 "data_offset": 0, 00:11:34.288 "data_size": 65536 00:11:34.288 }, 00:11:34.288 { 00:11:34.288 "name": "BaseBdev2", 00:11:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.288 "is_configured": false, 00:11:34.288 "data_offset": 0, 00:11:34.288 "data_size": 0 00:11:34.288 }, 00:11:34.288 { 00:11:34.288 "name": "BaseBdev3", 00:11:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.288 "is_configured": false, 00:11:34.288 "data_offset": 0, 00:11:34.288 "data_size": 0 00:11:34.288 }, 00:11:34.288 { 00:11:34.288 "name": "BaseBdev4", 00:11:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.288 "is_configured": false, 00:11:34.288 "data_offset": 0, 00:11:34.288 "data_size": 0 00:11:34.288 } 00:11:34.288 ] 00:11:34.288 }' 00:11:34.288 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.288 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.548 [2024-12-09 14:44:12.632526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.548 BaseBdev2 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.548 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.548 [ 00:11:34.548 { 00:11:34.548 "name": "BaseBdev2", 00:11:34.548 "aliases": [ 00:11:34.548 "f361c411-751f-46de-aed6-073529e57349" 00:11:34.548 ], 00:11:34.548 "product_name": "Malloc disk", 00:11:34.548 "block_size": 512, 00:11:34.548 "num_blocks": 65536, 00:11:34.548 "uuid": "f361c411-751f-46de-aed6-073529e57349", 00:11:34.548 "assigned_rate_limits": { 00:11:34.548 "rw_ios_per_sec": 0, 00:11:34.548 "rw_mbytes_per_sec": 0, 00:11:34.548 "r_mbytes_per_sec": 0, 00:11:34.548 "w_mbytes_per_sec": 0 00:11:34.548 }, 00:11:34.548 "claimed": true, 00:11:34.548 "claim_type": "exclusive_write", 00:11:34.548 "zoned": false, 00:11:34.548 "supported_io_types": { 00:11:34.548 "read": true, 
00:11:34.548 "write": true, 00:11:34.548 "unmap": true, 00:11:34.548 "flush": true, 00:11:34.548 "reset": true, 00:11:34.548 "nvme_admin": false, 00:11:34.548 "nvme_io": false, 00:11:34.548 "nvme_io_md": false, 00:11:34.548 "write_zeroes": true, 00:11:34.548 "zcopy": true, 00:11:34.807 "get_zone_info": false, 00:11:34.807 "zone_management": false, 00:11:34.807 "zone_append": false, 00:11:34.807 "compare": false, 00:11:34.807 "compare_and_write": false, 00:11:34.807 "abort": true, 00:11:34.807 "seek_hole": false, 00:11:34.807 "seek_data": false, 00:11:34.807 "copy": true, 00:11:34.807 "nvme_iov_md": false 00:11:34.807 }, 00:11:34.807 "memory_domains": [ 00:11:34.807 { 00:11:34.807 "dma_device_id": "system", 00:11:34.807 "dma_device_type": 1 00:11:34.807 }, 00:11:34.807 { 00:11:34.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.807 "dma_device_type": 2 00:11:34.807 } 00:11:34.807 ], 00:11:34.807 "driver_specific": {} 00:11:34.807 } 00:11:34.807 ] 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.807 "name": "Existed_Raid", 00:11:34.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.807 "strip_size_kb": 0, 00:11:34.807 "state": "configuring", 00:11:34.807 "raid_level": "raid1", 00:11:34.807 "superblock": false, 00:11:34.807 "num_base_bdevs": 4, 00:11:34.807 "num_base_bdevs_discovered": 2, 00:11:34.807 "num_base_bdevs_operational": 4, 00:11:34.807 "base_bdevs_list": [ 00:11:34.807 { 00:11:34.807 "name": "BaseBdev1", 00:11:34.807 "uuid": "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa", 00:11:34.807 "is_configured": true, 00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 65536 00:11:34.807 }, 00:11:34.807 { 00:11:34.807 "name": "BaseBdev2", 00:11:34.807 "uuid": "f361c411-751f-46de-aed6-073529e57349", 00:11:34.807 "is_configured": true, 
00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 65536 00:11:34.807 }, 00:11:34.807 { 00:11:34.807 "name": "BaseBdev3", 00:11:34.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.807 "is_configured": false, 00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 0 00:11:34.807 }, 00:11:34.807 { 00:11:34.807 "name": "BaseBdev4", 00:11:34.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.807 "is_configured": false, 00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 0 00:11:34.807 } 00:11:34.807 ] 00:11:34.807 }' 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.807 14:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.066 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.066 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.066 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.326 [2024-12-09 14:44:13.211507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.326 BaseBdev3 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.326 [ 00:11:35.326 { 00:11:35.326 "name": "BaseBdev3", 00:11:35.326 "aliases": [ 00:11:35.326 "28832f12-2177-4cd0-b3b3-abeb92c67c5a" 00:11:35.326 ], 00:11:35.326 "product_name": "Malloc disk", 00:11:35.326 "block_size": 512, 00:11:35.326 "num_blocks": 65536, 00:11:35.326 "uuid": "28832f12-2177-4cd0-b3b3-abeb92c67c5a", 00:11:35.326 "assigned_rate_limits": { 00:11:35.326 "rw_ios_per_sec": 0, 00:11:35.326 "rw_mbytes_per_sec": 0, 00:11:35.326 "r_mbytes_per_sec": 0, 00:11:35.326 "w_mbytes_per_sec": 0 00:11:35.326 }, 00:11:35.326 "claimed": true, 00:11:35.326 "claim_type": "exclusive_write", 00:11:35.326 "zoned": false, 00:11:35.326 "supported_io_types": { 00:11:35.326 "read": true, 00:11:35.326 "write": true, 00:11:35.326 "unmap": true, 00:11:35.326 "flush": true, 00:11:35.326 "reset": true, 00:11:35.326 "nvme_admin": false, 00:11:35.326 "nvme_io": false, 00:11:35.326 "nvme_io_md": false, 00:11:35.326 "write_zeroes": true, 00:11:35.326 "zcopy": true, 00:11:35.326 "get_zone_info": false, 00:11:35.326 "zone_management": false, 00:11:35.326 "zone_append": false, 00:11:35.326 "compare": false, 00:11:35.326 "compare_and_write": false, 
00:11:35.326 "abort": true, 00:11:35.326 "seek_hole": false, 00:11:35.326 "seek_data": false, 00:11:35.326 "copy": true, 00:11:35.326 "nvme_iov_md": false 00:11:35.326 }, 00:11:35.326 "memory_domains": [ 00:11:35.326 { 00:11:35.326 "dma_device_id": "system", 00:11:35.326 "dma_device_type": 1 00:11:35.326 }, 00:11:35.326 { 00:11:35.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.326 "dma_device_type": 2 00:11:35.326 } 00:11:35.326 ], 00:11:35.326 "driver_specific": {} 00:11:35.326 } 00:11:35.326 ] 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.326 "name": "Existed_Raid", 00:11:35.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.326 "strip_size_kb": 0, 00:11:35.326 "state": "configuring", 00:11:35.326 "raid_level": "raid1", 00:11:35.326 "superblock": false, 00:11:35.326 "num_base_bdevs": 4, 00:11:35.326 "num_base_bdevs_discovered": 3, 00:11:35.326 "num_base_bdevs_operational": 4, 00:11:35.326 "base_bdevs_list": [ 00:11:35.326 { 00:11:35.326 "name": "BaseBdev1", 00:11:35.326 "uuid": "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa", 00:11:35.326 "is_configured": true, 00:11:35.326 "data_offset": 0, 00:11:35.326 "data_size": 65536 00:11:35.326 }, 00:11:35.326 { 00:11:35.326 "name": "BaseBdev2", 00:11:35.326 "uuid": "f361c411-751f-46de-aed6-073529e57349", 00:11:35.326 "is_configured": true, 00:11:35.326 "data_offset": 0, 00:11:35.326 "data_size": 65536 00:11:35.326 }, 00:11:35.326 { 00:11:35.326 "name": "BaseBdev3", 00:11:35.326 "uuid": "28832f12-2177-4cd0-b3b3-abeb92c67c5a", 00:11:35.326 "is_configured": true, 00:11:35.326 "data_offset": 0, 00:11:35.326 "data_size": 65536 00:11:35.326 }, 00:11:35.326 { 00:11:35.326 "name": "BaseBdev4", 00:11:35.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.326 "is_configured": false, 
00:11:35.326 "data_offset": 0, 00:11:35.326 "data_size": 0 00:11:35.326 } 00:11:35.326 ] 00:11:35.326 }' 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.326 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.895 [2024-12-09 14:44:13.783248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.895 [2024-12-09 14:44:13.783303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.895 [2024-12-09 14:44:13.783312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.895 [2024-12-09 14:44:13.783612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:35.895 [2024-12-09 14:44:13.783828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.895 [2024-12-09 14:44:13.783845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:35.895 [2024-12-09 14:44:13.784143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.895 BaseBdev4 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.895 [ 00:11:35.895 { 00:11:35.895 "name": "BaseBdev4", 00:11:35.895 "aliases": [ 00:11:35.895 "4366cd00-5a8c-4701-a0f9-0acfcceb0557" 00:11:35.895 ], 00:11:35.895 "product_name": "Malloc disk", 00:11:35.895 "block_size": 512, 00:11:35.895 "num_blocks": 65536, 00:11:35.895 "uuid": "4366cd00-5a8c-4701-a0f9-0acfcceb0557", 00:11:35.895 "assigned_rate_limits": { 00:11:35.895 "rw_ios_per_sec": 0, 00:11:35.895 "rw_mbytes_per_sec": 0, 00:11:35.895 "r_mbytes_per_sec": 0, 00:11:35.895 "w_mbytes_per_sec": 0 00:11:35.895 }, 00:11:35.895 "claimed": true, 00:11:35.895 "claim_type": "exclusive_write", 00:11:35.895 "zoned": false, 00:11:35.895 "supported_io_types": { 00:11:35.895 "read": true, 00:11:35.895 "write": true, 00:11:35.895 "unmap": true, 00:11:35.895 "flush": true, 00:11:35.895 "reset": true, 00:11:35.895 
"nvme_admin": false, 00:11:35.895 "nvme_io": false, 00:11:35.895 "nvme_io_md": false, 00:11:35.895 "write_zeroes": true, 00:11:35.895 "zcopy": true, 00:11:35.895 "get_zone_info": false, 00:11:35.895 "zone_management": false, 00:11:35.895 "zone_append": false, 00:11:35.895 "compare": false, 00:11:35.895 "compare_and_write": false, 00:11:35.895 "abort": true, 00:11:35.895 "seek_hole": false, 00:11:35.895 "seek_data": false, 00:11:35.895 "copy": true, 00:11:35.895 "nvme_iov_md": false 00:11:35.895 }, 00:11:35.895 "memory_domains": [ 00:11:35.895 { 00:11:35.895 "dma_device_id": "system", 00:11:35.895 "dma_device_type": 1 00:11:35.895 }, 00:11:35.895 { 00:11:35.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.895 "dma_device_type": 2 00:11:35.895 } 00:11:35.895 ], 00:11:35.895 "driver_specific": {} 00:11:35.895 } 00:11:35.895 ] 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.895 14:44:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.895 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.895 "name": "Existed_Raid", 00:11:35.896 "uuid": "ab223595-a040-42d7-ae1d-259810b862e8", 00:11:35.896 "strip_size_kb": 0, 00:11:35.896 "state": "online", 00:11:35.896 "raid_level": "raid1", 00:11:35.896 "superblock": false, 00:11:35.896 "num_base_bdevs": 4, 00:11:35.896 "num_base_bdevs_discovered": 4, 00:11:35.896 "num_base_bdevs_operational": 4, 00:11:35.896 "base_bdevs_list": [ 00:11:35.896 { 00:11:35.896 "name": "BaseBdev1", 00:11:35.896 "uuid": "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa", 00:11:35.896 "is_configured": true, 00:11:35.896 "data_offset": 0, 00:11:35.896 "data_size": 65536 00:11:35.896 }, 00:11:35.896 { 00:11:35.896 "name": "BaseBdev2", 00:11:35.896 "uuid": "f361c411-751f-46de-aed6-073529e57349", 00:11:35.896 "is_configured": true, 00:11:35.896 "data_offset": 0, 00:11:35.896 "data_size": 65536 00:11:35.896 }, 00:11:35.896 { 00:11:35.896 "name": "BaseBdev3", 00:11:35.896 "uuid": 
"28832f12-2177-4cd0-b3b3-abeb92c67c5a", 00:11:35.896 "is_configured": true, 00:11:35.896 "data_offset": 0, 00:11:35.896 "data_size": 65536 00:11:35.896 }, 00:11:35.896 { 00:11:35.896 "name": "BaseBdev4", 00:11:35.896 "uuid": "4366cd00-5a8c-4701-a0f9-0acfcceb0557", 00:11:35.896 "is_configured": true, 00:11:35.896 "data_offset": 0, 00:11:35.896 "data_size": 65536 00:11:35.896 } 00:11:35.896 ] 00:11:35.896 }' 00:11:35.896 14:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.896 14:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.155 [2024-12-09 14:44:14.242891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.155 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 14:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.414 "name": "Existed_Raid", 00:11:36.414 "aliases": [ 00:11:36.414 "ab223595-a040-42d7-ae1d-259810b862e8" 00:11:36.414 ], 00:11:36.414 "product_name": "Raid Volume", 00:11:36.414 "block_size": 512, 00:11:36.414 "num_blocks": 65536, 00:11:36.414 "uuid": "ab223595-a040-42d7-ae1d-259810b862e8", 00:11:36.414 "assigned_rate_limits": { 00:11:36.414 "rw_ios_per_sec": 0, 00:11:36.414 "rw_mbytes_per_sec": 0, 00:11:36.414 "r_mbytes_per_sec": 0, 00:11:36.414 "w_mbytes_per_sec": 0 00:11:36.414 }, 00:11:36.414 "claimed": false, 00:11:36.414 "zoned": false, 00:11:36.414 "supported_io_types": { 00:11:36.414 "read": true, 00:11:36.414 "write": true, 00:11:36.414 "unmap": false, 00:11:36.414 "flush": false, 00:11:36.414 "reset": true, 00:11:36.414 "nvme_admin": false, 00:11:36.414 "nvme_io": false, 00:11:36.414 "nvme_io_md": false, 00:11:36.414 "write_zeroes": true, 00:11:36.414 "zcopy": false, 00:11:36.414 "get_zone_info": false, 00:11:36.414 "zone_management": false, 00:11:36.414 "zone_append": false, 00:11:36.414 "compare": false, 00:11:36.414 "compare_and_write": false, 00:11:36.414 "abort": false, 00:11:36.414 "seek_hole": false, 00:11:36.414 "seek_data": false, 00:11:36.414 "copy": false, 00:11:36.414 "nvme_iov_md": false 00:11:36.414 }, 00:11:36.414 "memory_domains": [ 00:11:36.414 { 00:11:36.414 "dma_device_id": "system", 00:11:36.414 "dma_device_type": 1 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.414 "dma_device_type": 2 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "dma_device_id": "system", 00:11:36.414 "dma_device_type": 1 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.414 "dma_device_type": 2 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "dma_device_id": "system", 00:11:36.414 "dma_device_type": 1 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:36.414 "dma_device_type": 2 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "dma_device_id": "system", 00:11:36.414 "dma_device_type": 1 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.414 "dma_device_type": 2 00:11:36.414 } 00:11:36.414 ], 00:11:36.414 "driver_specific": { 00:11:36.414 "raid": { 00:11:36.414 "uuid": "ab223595-a040-42d7-ae1d-259810b862e8", 00:11:36.414 "strip_size_kb": 0, 00:11:36.414 "state": "online", 00:11:36.414 "raid_level": "raid1", 00:11:36.414 "superblock": false, 00:11:36.414 "num_base_bdevs": 4, 00:11:36.414 "num_base_bdevs_discovered": 4, 00:11:36.414 "num_base_bdevs_operational": 4, 00:11:36.414 "base_bdevs_list": [ 00:11:36.414 { 00:11:36.414 "name": "BaseBdev1", 00:11:36.414 "uuid": "6e6e1a7b-c7ec-4160-85cd-18e7a548a7aa", 00:11:36.414 "is_configured": true, 00:11:36.414 "data_offset": 0, 00:11:36.414 "data_size": 65536 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "name": "BaseBdev2", 00:11:36.414 "uuid": "f361c411-751f-46de-aed6-073529e57349", 00:11:36.414 "is_configured": true, 00:11:36.414 "data_offset": 0, 00:11:36.414 "data_size": 65536 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "name": "BaseBdev3", 00:11:36.414 "uuid": "28832f12-2177-4cd0-b3b3-abeb92c67c5a", 00:11:36.414 "is_configured": true, 00:11:36.414 "data_offset": 0, 00:11:36.414 "data_size": 65536 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "name": "BaseBdev4", 00:11:36.414 "uuid": "4366cd00-5a8c-4701-a0f9-0acfcceb0557", 00:11:36.414 "is_configured": true, 00:11:36.414 "data_offset": 0, 00:11:36.414 "data_size": 65536 00:11:36.414 } 00:11:36.414 ] 00:11:36.414 } 00:11:36.414 } 00:11:36.414 }' 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.414 BaseBdev2 00:11:36.414 BaseBdev3 
00:11:36.414 BaseBdev4' 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 14:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.673 14:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.673 [2024-12-09 14:44:14.578060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.673 
14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.673 "name": "Existed_Raid", 00:11:36.673 "uuid": "ab223595-a040-42d7-ae1d-259810b862e8", 00:11:36.673 "strip_size_kb": 0, 00:11:36.673 "state": "online", 00:11:36.673 "raid_level": "raid1", 00:11:36.673 "superblock": false, 00:11:36.673 "num_base_bdevs": 4, 00:11:36.673 "num_base_bdevs_discovered": 3, 00:11:36.673 "num_base_bdevs_operational": 3, 00:11:36.673 "base_bdevs_list": [ 00:11:36.673 { 00:11:36.673 "name": null, 00:11:36.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.673 "is_configured": false, 00:11:36.673 "data_offset": 0, 00:11:36.673 "data_size": 65536 00:11:36.673 }, 00:11:36.673 { 00:11:36.673 "name": "BaseBdev2", 00:11:36.673 "uuid": "f361c411-751f-46de-aed6-073529e57349", 00:11:36.673 "is_configured": true, 00:11:36.673 "data_offset": 0, 00:11:36.673 "data_size": 65536 00:11:36.673 }, 00:11:36.673 { 00:11:36.673 "name": "BaseBdev3", 00:11:36.673 "uuid": "28832f12-2177-4cd0-b3b3-abeb92c67c5a", 00:11:36.673 "is_configured": true, 00:11:36.673 "data_offset": 0, 
00:11:36.673 "data_size": 65536 00:11:36.673 }, 00:11:36.673 { 00:11:36.673 "name": "BaseBdev4", 00:11:36.673 "uuid": "4366cd00-5a8c-4701-a0f9-0acfcceb0557", 00:11:36.673 "is_configured": true, 00:11:36.673 "data_offset": 0, 00:11:36.673 "data_size": 65536 00:11:36.673 } 00:11:36.673 ] 00:11:36.673 }' 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.673 14:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 [2024-12-09 14:44:15.200767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.241 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 [2024-12-09 14:44:15.356867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.501 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.501 [2024-12-09 14:44:15.519249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.501 [2024-12-09 14:44:15.519407] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.501 [2024-12-09 14:44:15.621980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.501 [2024-12-09 14:44:15.622129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.501 [2024-12-09 14:44:15.622178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.761 BaseBdev2 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.761 [ 00:11:37.761 { 00:11:37.761 "name": "BaseBdev2", 00:11:37.761 "aliases": [ 00:11:37.761 "6eeeac32-72fc-49d3-ae72-398ea7e25698" 00:11:37.761 ], 00:11:37.761 "product_name": "Malloc disk", 00:11:37.761 "block_size": 512, 00:11:37.761 "num_blocks": 65536, 00:11:37.761 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:37.761 "assigned_rate_limits": { 00:11:37.761 "rw_ios_per_sec": 0, 00:11:37.761 "rw_mbytes_per_sec": 0, 00:11:37.761 "r_mbytes_per_sec": 0, 00:11:37.761 "w_mbytes_per_sec": 0 00:11:37.761 }, 00:11:37.761 "claimed": false, 00:11:37.761 "zoned": false, 00:11:37.761 "supported_io_types": { 00:11:37.761 "read": true, 00:11:37.761 "write": true, 00:11:37.761 "unmap": true, 00:11:37.761 "flush": true, 00:11:37.761 "reset": true, 00:11:37.761 "nvme_admin": false, 00:11:37.761 "nvme_io": false, 00:11:37.761 "nvme_io_md": false, 00:11:37.761 "write_zeroes": true, 00:11:37.761 "zcopy": true, 00:11:37.761 "get_zone_info": false, 00:11:37.761 "zone_management": false, 00:11:37.761 "zone_append": false, 
00:11:37.761 "compare": false, 00:11:37.761 "compare_and_write": false, 00:11:37.761 "abort": true, 00:11:37.761 "seek_hole": false, 00:11:37.761 "seek_data": false, 00:11:37.761 "copy": true, 00:11:37.761 "nvme_iov_md": false 00:11:37.761 }, 00:11:37.761 "memory_domains": [ 00:11:37.761 { 00:11:37.761 "dma_device_id": "system", 00:11:37.761 "dma_device_type": 1 00:11:37.761 }, 00:11:37.761 { 00:11:37.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.761 "dma_device_type": 2 00:11:37.761 } 00:11:37.761 ], 00:11:37.761 "driver_specific": {} 00:11:37.761 } 00:11:37.761 ] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.761 BaseBdev3 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.761 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.762 [ 00:11:37.762 { 00:11:37.762 "name": "BaseBdev3", 00:11:37.762 "aliases": [ 00:11:37.762 "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd" 00:11:37.762 ], 00:11:37.762 "product_name": "Malloc disk", 00:11:37.762 "block_size": 512, 00:11:37.762 "num_blocks": 65536, 00:11:37.762 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:37.762 "assigned_rate_limits": { 00:11:37.762 "rw_ios_per_sec": 0, 00:11:37.762 "rw_mbytes_per_sec": 0, 00:11:37.762 "r_mbytes_per_sec": 0, 00:11:37.762 "w_mbytes_per_sec": 0 00:11:37.762 }, 00:11:37.762 "claimed": false, 00:11:37.762 "zoned": false, 00:11:37.762 "supported_io_types": { 00:11:37.762 "read": true, 00:11:37.762 "write": true, 00:11:37.762 "unmap": true, 00:11:37.762 "flush": true, 00:11:37.762 "reset": true, 00:11:37.762 "nvme_admin": false, 00:11:37.762 "nvme_io": false, 00:11:37.762 "nvme_io_md": false, 00:11:37.762 "write_zeroes": true, 00:11:37.762 "zcopy": true, 00:11:37.762 "get_zone_info": false, 00:11:37.762 "zone_management": false, 00:11:37.762 "zone_append": false, 
00:11:37.762 "compare": false, 00:11:37.762 "compare_and_write": false, 00:11:37.762 "abort": true, 00:11:37.762 "seek_hole": false, 00:11:37.762 "seek_data": false, 00:11:37.762 "copy": true, 00:11:37.762 "nvme_iov_md": false 00:11:37.762 }, 00:11:37.762 "memory_domains": [ 00:11:37.762 { 00:11:37.762 "dma_device_id": "system", 00:11:37.762 "dma_device_type": 1 00:11:37.762 }, 00:11:37.762 { 00:11:37.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.762 "dma_device_type": 2 00:11:37.762 } 00:11:37.762 ], 00:11:37.762 "driver_specific": {} 00:11:37.762 } 00:11:37.762 ] 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.762 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.022 BaseBdev4 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.022 [ 00:11:38.022 { 00:11:38.022 "name": "BaseBdev4", 00:11:38.022 "aliases": [ 00:11:38.022 "050ac87d-19d8-4dfd-8b06-56b7db8035b6" 00:11:38.022 ], 00:11:38.022 "product_name": "Malloc disk", 00:11:38.022 "block_size": 512, 00:11:38.022 "num_blocks": 65536, 00:11:38.022 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:38.022 "assigned_rate_limits": { 00:11:38.022 "rw_ios_per_sec": 0, 00:11:38.022 "rw_mbytes_per_sec": 0, 00:11:38.022 "r_mbytes_per_sec": 0, 00:11:38.022 "w_mbytes_per_sec": 0 00:11:38.022 }, 00:11:38.022 "claimed": false, 00:11:38.022 "zoned": false, 00:11:38.022 "supported_io_types": { 00:11:38.022 "read": true, 00:11:38.022 "write": true, 00:11:38.022 "unmap": true, 00:11:38.022 "flush": true, 00:11:38.022 "reset": true, 00:11:38.022 "nvme_admin": false, 00:11:38.022 "nvme_io": false, 00:11:38.022 "nvme_io_md": false, 00:11:38.022 "write_zeroes": true, 00:11:38.022 "zcopy": true, 00:11:38.022 "get_zone_info": false, 00:11:38.022 "zone_management": false, 00:11:38.022 "zone_append": false, 
00:11:38.022 "compare": false, 00:11:38.022 "compare_and_write": false, 00:11:38.022 "abort": true, 00:11:38.022 "seek_hole": false, 00:11:38.022 "seek_data": false, 00:11:38.022 "copy": true, 00:11:38.022 "nvme_iov_md": false 00:11:38.022 }, 00:11:38.022 "memory_domains": [ 00:11:38.022 { 00:11:38.022 "dma_device_id": "system", 00:11:38.022 "dma_device_type": 1 00:11:38.022 }, 00:11:38.022 { 00:11:38.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.022 "dma_device_type": 2 00:11:38.022 } 00:11:38.022 ], 00:11:38.022 "driver_specific": {} 00:11:38.022 } 00:11:38.022 ] 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.022 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.022 [2024-12-09 14:44:15.936856] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.022 [2024-12-09 14:44:15.936953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.022 [2024-12-09 14:44:15.937020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.022 [2024-12-09 14:44:15.939149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.022 [2024-12-09 14:44:15.939251] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:38.023 "name": "Existed_Raid", 00:11:38.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.023 "strip_size_kb": 0, 00:11:38.023 "state": "configuring", 00:11:38.023 "raid_level": "raid1", 00:11:38.023 "superblock": false, 00:11:38.023 "num_base_bdevs": 4, 00:11:38.023 "num_base_bdevs_discovered": 3, 00:11:38.023 "num_base_bdevs_operational": 4, 00:11:38.023 "base_bdevs_list": [ 00:11:38.023 { 00:11:38.023 "name": "BaseBdev1", 00:11:38.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.023 "is_configured": false, 00:11:38.023 "data_offset": 0, 00:11:38.023 "data_size": 0 00:11:38.023 }, 00:11:38.023 { 00:11:38.023 "name": "BaseBdev2", 00:11:38.023 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:38.023 "is_configured": true, 00:11:38.023 "data_offset": 0, 00:11:38.023 "data_size": 65536 00:11:38.023 }, 00:11:38.023 { 00:11:38.023 "name": "BaseBdev3", 00:11:38.023 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:38.023 "is_configured": true, 00:11:38.023 "data_offset": 0, 00:11:38.023 "data_size": 65536 00:11:38.023 }, 00:11:38.023 { 00:11:38.023 "name": "BaseBdev4", 00:11:38.023 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:38.023 "is_configured": true, 00:11:38.023 "data_offset": 0, 00:11:38.023 "data_size": 65536 00:11:38.023 } 00:11:38.023 ] 00:11:38.023 }' 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.023 14:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.591 [2024-12-09 14:44:16.416046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.591 "name": "Existed_Raid", 00:11:38.591 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:38.591 "strip_size_kb": 0, 00:11:38.591 "state": "configuring", 00:11:38.591 "raid_level": "raid1", 00:11:38.591 "superblock": false, 00:11:38.591 "num_base_bdevs": 4, 00:11:38.591 "num_base_bdevs_discovered": 2, 00:11:38.591 "num_base_bdevs_operational": 4, 00:11:38.591 "base_bdevs_list": [ 00:11:38.591 { 00:11:38.591 "name": "BaseBdev1", 00:11:38.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.591 "is_configured": false, 00:11:38.591 "data_offset": 0, 00:11:38.591 "data_size": 0 00:11:38.591 }, 00:11:38.591 { 00:11:38.591 "name": null, 00:11:38.591 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:38.591 "is_configured": false, 00:11:38.591 "data_offset": 0, 00:11:38.591 "data_size": 65536 00:11:38.591 }, 00:11:38.591 { 00:11:38.591 "name": "BaseBdev3", 00:11:38.591 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:38.591 "is_configured": true, 00:11:38.591 "data_offset": 0, 00:11:38.591 "data_size": 65536 00:11:38.591 }, 00:11:38.591 { 00:11:38.591 "name": "BaseBdev4", 00:11:38.591 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:38.591 "is_configured": true, 00:11:38.591 "data_offset": 0, 00:11:38.591 "data_size": 65536 00:11:38.591 } 00:11:38.591 ] 00:11:38.591 }' 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.591 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.851 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.113 [2024-12-09 14:44:16.997002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.113 BaseBdev1 00:11:39.113 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.113 14:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.113 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:39.113 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.113 14:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.113 [ 00:11:39.113 { 00:11:39.113 "name": "BaseBdev1", 00:11:39.113 "aliases": [ 00:11:39.113 "113ac9f1-c1c3-41e4-948c-8605726e5fe9" 00:11:39.113 ], 00:11:39.113 "product_name": "Malloc disk", 00:11:39.113 "block_size": 512, 00:11:39.113 "num_blocks": 65536, 00:11:39.113 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:39.113 "assigned_rate_limits": { 00:11:39.113 "rw_ios_per_sec": 0, 00:11:39.113 "rw_mbytes_per_sec": 0, 00:11:39.113 "r_mbytes_per_sec": 0, 00:11:39.113 "w_mbytes_per_sec": 0 00:11:39.113 }, 00:11:39.113 "claimed": true, 00:11:39.113 "claim_type": "exclusive_write", 00:11:39.113 "zoned": false, 00:11:39.113 "supported_io_types": { 00:11:39.113 "read": true, 00:11:39.113 "write": true, 00:11:39.113 "unmap": true, 00:11:39.113 "flush": true, 00:11:39.113 "reset": true, 00:11:39.113 "nvme_admin": false, 00:11:39.113 "nvme_io": false, 00:11:39.113 "nvme_io_md": false, 00:11:39.113 "write_zeroes": true, 00:11:39.113 "zcopy": true, 00:11:39.113 "get_zone_info": false, 00:11:39.113 "zone_management": false, 00:11:39.113 "zone_append": false, 00:11:39.113 "compare": false, 00:11:39.113 "compare_and_write": false, 00:11:39.113 "abort": true, 00:11:39.113 "seek_hole": false, 00:11:39.113 "seek_data": false, 00:11:39.113 "copy": true, 00:11:39.113 "nvme_iov_md": false 00:11:39.113 }, 00:11:39.113 "memory_domains": [ 00:11:39.113 { 00:11:39.113 "dma_device_id": "system", 00:11:39.113 "dma_device_type": 1 00:11:39.113 }, 00:11:39.113 { 00:11:39.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.113 "dma_device_type": 2 00:11:39.113 } 00:11:39.113 ], 00:11:39.113 "driver_specific": {} 00:11:39.113 } 00:11:39.113 ] 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.113 "name": "Existed_Raid", 00:11:39.113 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:39.113 "strip_size_kb": 0, 00:11:39.113 "state": "configuring", 00:11:39.113 "raid_level": "raid1", 00:11:39.113 "superblock": false, 00:11:39.113 "num_base_bdevs": 4, 00:11:39.113 "num_base_bdevs_discovered": 3, 00:11:39.113 "num_base_bdevs_operational": 4, 00:11:39.113 "base_bdevs_list": [ 00:11:39.113 { 00:11:39.113 "name": "BaseBdev1", 00:11:39.113 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:39.113 "is_configured": true, 00:11:39.113 "data_offset": 0, 00:11:39.113 "data_size": 65536 00:11:39.113 }, 00:11:39.113 { 00:11:39.113 "name": null, 00:11:39.113 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:39.113 "is_configured": false, 00:11:39.113 "data_offset": 0, 00:11:39.113 "data_size": 65536 00:11:39.113 }, 00:11:39.113 { 00:11:39.113 "name": "BaseBdev3", 00:11:39.113 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:39.113 "is_configured": true, 00:11:39.113 "data_offset": 0, 00:11:39.113 "data_size": 65536 00:11:39.113 }, 00:11:39.113 { 00:11:39.113 "name": "BaseBdev4", 00:11:39.113 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:39.113 "is_configured": true, 00:11:39.113 "data_offset": 0, 00:11:39.113 "data_size": 65536 00:11:39.113 } 00:11:39.113 ] 00:11:39.113 }' 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.113 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.682 [2024-12-09 14:44:17.548177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.682 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.683 "name": "Existed_Raid", 00:11:39.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.683 "strip_size_kb": 0, 00:11:39.683 "state": "configuring", 00:11:39.683 "raid_level": "raid1", 00:11:39.683 "superblock": false, 00:11:39.683 "num_base_bdevs": 4, 00:11:39.683 "num_base_bdevs_discovered": 2, 00:11:39.683 "num_base_bdevs_operational": 4, 00:11:39.683 "base_bdevs_list": [ 00:11:39.683 { 00:11:39.683 "name": "BaseBdev1", 00:11:39.683 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:39.683 "is_configured": true, 00:11:39.683 "data_offset": 0, 00:11:39.683 "data_size": 65536 00:11:39.683 }, 00:11:39.683 { 00:11:39.683 "name": null, 00:11:39.683 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:39.683 "is_configured": false, 00:11:39.683 "data_offset": 0, 00:11:39.683 "data_size": 65536 00:11:39.683 }, 00:11:39.683 { 00:11:39.683 "name": null, 00:11:39.683 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:39.683 "is_configured": false, 00:11:39.683 "data_offset": 0, 00:11:39.683 "data_size": 65536 00:11:39.683 }, 00:11:39.683 { 00:11:39.683 "name": "BaseBdev4", 00:11:39.683 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:39.683 "is_configured": true, 00:11:39.683 "data_offset": 0, 00:11:39.683 "data_size": 65536 00:11:39.683 } 00:11:39.683 ] 00:11:39.683 }' 00:11:39.683 14:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.683 14:44:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.942 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.942 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.942 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.942 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.942 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.942 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.943 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.943 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.943 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.943 [2024-12-09 14:44:18.063294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.202 14:44:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.202 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.202 "name": "Existed_Raid", 00:11:40.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.202 "strip_size_kb": 0, 00:11:40.202 "state": "configuring", 00:11:40.202 "raid_level": "raid1", 00:11:40.202 "superblock": false, 00:11:40.202 "num_base_bdevs": 4, 00:11:40.202 "num_base_bdevs_discovered": 3, 00:11:40.203 "num_base_bdevs_operational": 4, 00:11:40.203 "base_bdevs_list": [ 00:11:40.203 { 00:11:40.203 "name": "BaseBdev1", 00:11:40.203 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:40.203 "is_configured": true, 00:11:40.203 "data_offset": 0, 00:11:40.203 "data_size": 65536 00:11:40.203 }, 00:11:40.203 { 00:11:40.203 "name": null, 00:11:40.203 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:40.203 "is_configured": false, 00:11:40.203 "data_offset": 
0, 00:11:40.203 "data_size": 65536 00:11:40.203 }, 00:11:40.203 { 00:11:40.203 "name": "BaseBdev3", 00:11:40.203 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:40.203 "is_configured": true, 00:11:40.203 "data_offset": 0, 00:11:40.203 "data_size": 65536 00:11:40.203 }, 00:11:40.203 { 00:11:40.203 "name": "BaseBdev4", 00:11:40.203 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:40.203 "is_configured": true, 00:11:40.203 "data_offset": 0, 00:11:40.203 "data_size": 65536 00:11:40.203 } 00:11:40.203 ] 00:11:40.203 }' 00:11:40.203 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.203 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.463 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.463 [2024-12-09 14:44:18.562454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.722 14:44:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.722 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.722 "name": "Existed_Raid", 00:11:40.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.722 "strip_size_kb": 0, 00:11:40.722 "state": "configuring", 00:11:40.722 
"raid_level": "raid1", 00:11:40.722 "superblock": false, 00:11:40.722 "num_base_bdevs": 4, 00:11:40.722 "num_base_bdevs_discovered": 2, 00:11:40.722 "num_base_bdevs_operational": 4, 00:11:40.722 "base_bdevs_list": [ 00:11:40.722 { 00:11:40.722 "name": null, 00:11:40.722 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:40.722 "is_configured": false, 00:11:40.722 "data_offset": 0, 00:11:40.722 "data_size": 65536 00:11:40.722 }, 00:11:40.723 { 00:11:40.723 "name": null, 00:11:40.723 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:40.723 "is_configured": false, 00:11:40.723 "data_offset": 0, 00:11:40.723 "data_size": 65536 00:11:40.723 }, 00:11:40.723 { 00:11:40.723 "name": "BaseBdev3", 00:11:40.723 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:40.723 "is_configured": true, 00:11:40.723 "data_offset": 0, 00:11:40.723 "data_size": 65536 00:11:40.723 }, 00:11:40.723 { 00:11:40.723 "name": "BaseBdev4", 00:11:40.723 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:40.723 "is_configured": true, 00:11:40.723 "data_offset": 0, 00:11:40.723 "data_size": 65536 00:11:40.723 } 00:11:40.723 ] 00:11:40.723 }' 00:11:40.723 14:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.723 14:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.292 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.292 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.292 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.292 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.292 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.293 [2024-12-09 14:44:19.198716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.293 "name": "Existed_Raid", 00:11:41.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.293 "strip_size_kb": 0, 00:11:41.293 "state": "configuring", 00:11:41.293 "raid_level": "raid1", 00:11:41.293 "superblock": false, 00:11:41.293 "num_base_bdevs": 4, 00:11:41.293 "num_base_bdevs_discovered": 3, 00:11:41.293 "num_base_bdevs_operational": 4, 00:11:41.293 "base_bdevs_list": [ 00:11:41.293 { 00:11:41.293 "name": null, 00:11:41.293 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:41.293 "is_configured": false, 00:11:41.293 "data_offset": 0, 00:11:41.293 "data_size": 65536 00:11:41.293 }, 00:11:41.293 { 00:11:41.293 "name": "BaseBdev2", 00:11:41.293 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:41.293 "is_configured": true, 00:11:41.293 "data_offset": 0, 00:11:41.293 "data_size": 65536 00:11:41.293 }, 00:11:41.293 { 00:11:41.293 "name": "BaseBdev3", 00:11:41.293 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:41.293 "is_configured": true, 00:11:41.293 "data_offset": 0, 00:11:41.293 "data_size": 65536 00:11:41.293 }, 00:11:41.293 { 00:11:41.293 "name": "BaseBdev4", 00:11:41.293 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:41.293 "is_configured": true, 00:11:41.293 "data_offset": 0, 00:11:41.293 "data_size": 65536 00:11:41.293 } 00:11:41.293 ] 00:11:41.293 }' 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.293 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.553 14:44:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 113ac9f1-c1c3-41e4-948c-8605726e5fe9 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.553 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.813 [2024-12-09 14:44:19.715231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.813 [2024-12-09 14:44:19.715369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.813 [2024-12-09 14:44:19.715401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:41.813 
[2024-12-09 14:44:19.715775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:41.813 [2024-12-09 14:44:19.716008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.813 [2024-12-09 14:44:19.716060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.813 [2024-12-09 14:44:19.716410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.813 NewBaseBdev 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.813 [ 00:11:41.813 { 00:11:41.813 "name": "NewBaseBdev", 00:11:41.813 "aliases": [ 00:11:41.813 "113ac9f1-c1c3-41e4-948c-8605726e5fe9" 00:11:41.813 ], 00:11:41.813 "product_name": "Malloc disk", 00:11:41.813 "block_size": 512, 00:11:41.813 "num_blocks": 65536, 00:11:41.813 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:41.813 "assigned_rate_limits": { 00:11:41.813 "rw_ios_per_sec": 0, 00:11:41.813 "rw_mbytes_per_sec": 0, 00:11:41.813 "r_mbytes_per_sec": 0, 00:11:41.813 "w_mbytes_per_sec": 0 00:11:41.813 }, 00:11:41.813 "claimed": true, 00:11:41.813 "claim_type": "exclusive_write", 00:11:41.813 "zoned": false, 00:11:41.813 "supported_io_types": { 00:11:41.813 "read": true, 00:11:41.813 "write": true, 00:11:41.813 "unmap": true, 00:11:41.813 "flush": true, 00:11:41.813 "reset": true, 00:11:41.813 "nvme_admin": false, 00:11:41.813 "nvme_io": false, 00:11:41.813 "nvme_io_md": false, 00:11:41.813 "write_zeroes": true, 00:11:41.813 "zcopy": true, 00:11:41.813 "get_zone_info": false, 00:11:41.813 "zone_management": false, 00:11:41.813 "zone_append": false, 00:11:41.813 "compare": false, 00:11:41.813 "compare_and_write": false, 00:11:41.813 "abort": true, 00:11:41.813 "seek_hole": false, 00:11:41.813 "seek_data": false, 00:11:41.813 "copy": true, 00:11:41.813 "nvme_iov_md": false 00:11:41.813 }, 00:11:41.813 "memory_domains": [ 00:11:41.813 { 00:11:41.813 "dma_device_id": "system", 00:11:41.813 "dma_device_type": 1 00:11:41.813 }, 00:11:41.813 { 00:11:41.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.813 "dma_device_type": 2 00:11:41.813 } 00:11:41.813 ], 00:11:41.813 "driver_specific": {} 00:11:41.813 } 00:11:41.813 ] 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.813 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.814 "name": "Existed_Raid", 00:11:41.814 "uuid": "d95b0281-ebd2-40b9-999d-e6aae61b53ec", 00:11:41.814 "strip_size_kb": 0, 00:11:41.814 "state": "online", 00:11:41.814 
"raid_level": "raid1", 00:11:41.814 "superblock": false, 00:11:41.814 "num_base_bdevs": 4, 00:11:41.814 "num_base_bdevs_discovered": 4, 00:11:41.814 "num_base_bdevs_operational": 4, 00:11:41.814 "base_bdevs_list": [ 00:11:41.814 { 00:11:41.814 "name": "NewBaseBdev", 00:11:41.814 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:41.814 "is_configured": true, 00:11:41.814 "data_offset": 0, 00:11:41.814 "data_size": 65536 00:11:41.814 }, 00:11:41.814 { 00:11:41.814 "name": "BaseBdev2", 00:11:41.814 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:41.814 "is_configured": true, 00:11:41.814 "data_offset": 0, 00:11:41.814 "data_size": 65536 00:11:41.814 }, 00:11:41.814 { 00:11:41.814 "name": "BaseBdev3", 00:11:41.814 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:41.814 "is_configured": true, 00:11:41.814 "data_offset": 0, 00:11:41.814 "data_size": 65536 00:11:41.814 }, 00:11:41.814 { 00:11:41.814 "name": "BaseBdev4", 00:11:41.814 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:41.814 "is_configured": true, 00:11:41.814 "data_offset": 0, 00:11:41.814 "data_size": 65536 00:11:41.814 } 00:11:41.814 ] 00:11:41.814 }' 00:11:41.814 14:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.814 14:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.383 [2024-12-09 14:44:20.222930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.383 "name": "Existed_Raid", 00:11:42.383 "aliases": [ 00:11:42.383 "d95b0281-ebd2-40b9-999d-e6aae61b53ec" 00:11:42.383 ], 00:11:42.383 "product_name": "Raid Volume", 00:11:42.383 "block_size": 512, 00:11:42.383 "num_blocks": 65536, 00:11:42.383 "uuid": "d95b0281-ebd2-40b9-999d-e6aae61b53ec", 00:11:42.383 "assigned_rate_limits": { 00:11:42.383 "rw_ios_per_sec": 0, 00:11:42.383 "rw_mbytes_per_sec": 0, 00:11:42.383 "r_mbytes_per_sec": 0, 00:11:42.383 "w_mbytes_per_sec": 0 00:11:42.383 }, 00:11:42.383 "claimed": false, 00:11:42.383 "zoned": false, 00:11:42.383 "supported_io_types": { 00:11:42.383 "read": true, 00:11:42.383 "write": true, 00:11:42.383 "unmap": false, 00:11:42.383 "flush": false, 00:11:42.383 "reset": true, 00:11:42.383 "nvme_admin": false, 00:11:42.383 "nvme_io": false, 00:11:42.383 "nvme_io_md": false, 00:11:42.383 "write_zeroes": true, 00:11:42.383 "zcopy": false, 00:11:42.383 "get_zone_info": false, 00:11:42.383 "zone_management": false, 00:11:42.383 "zone_append": false, 00:11:42.383 "compare": false, 00:11:42.383 "compare_and_write": false, 00:11:42.383 "abort": false, 00:11:42.383 "seek_hole": false, 00:11:42.383 "seek_data": false, 00:11:42.383 
"copy": false, 00:11:42.383 "nvme_iov_md": false 00:11:42.383 }, 00:11:42.383 "memory_domains": [ 00:11:42.383 { 00:11:42.383 "dma_device_id": "system", 00:11:42.383 "dma_device_type": 1 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.383 "dma_device_type": 2 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "dma_device_id": "system", 00:11:42.383 "dma_device_type": 1 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.383 "dma_device_type": 2 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "dma_device_id": "system", 00:11:42.383 "dma_device_type": 1 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.383 "dma_device_type": 2 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "dma_device_id": "system", 00:11:42.383 "dma_device_type": 1 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.383 "dma_device_type": 2 00:11:42.383 } 00:11:42.383 ], 00:11:42.383 "driver_specific": { 00:11:42.383 "raid": { 00:11:42.383 "uuid": "d95b0281-ebd2-40b9-999d-e6aae61b53ec", 00:11:42.383 "strip_size_kb": 0, 00:11:42.383 "state": "online", 00:11:42.383 "raid_level": "raid1", 00:11:42.383 "superblock": false, 00:11:42.383 "num_base_bdevs": 4, 00:11:42.383 "num_base_bdevs_discovered": 4, 00:11:42.383 "num_base_bdevs_operational": 4, 00:11:42.383 "base_bdevs_list": [ 00:11:42.383 { 00:11:42.383 "name": "NewBaseBdev", 00:11:42.383 "uuid": "113ac9f1-c1c3-41e4-948c-8605726e5fe9", 00:11:42.383 "is_configured": true, 00:11:42.383 "data_offset": 0, 00:11:42.383 "data_size": 65536 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "name": "BaseBdev2", 00:11:42.383 "uuid": "6eeeac32-72fc-49d3-ae72-398ea7e25698", 00:11:42.383 "is_configured": true, 00:11:42.383 "data_offset": 0, 00:11:42.383 "data_size": 65536 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "name": "BaseBdev3", 00:11:42.383 "uuid": "7bc6db68-0e2b-476c-93d0-a9f2dbdd61fd", 00:11:42.383 
"is_configured": true, 00:11:42.383 "data_offset": 0, 00:11:42.383 "data_size": 65536 00:11:42.383 }, 00:11:42.383 { 00:11:42.383 "name": "BaseBdev4", 00:11:42.383 "uuid": "050ac87d-19d8-4dfd-8b06-56b7db8035b6", 00:11:42.383 "is_configured": true, 00:11:42.383 "data_offset": 0, 00:11:42.383 "data_size": 65536 00:11:42.383 } 00:11:42.383 ] 00:11:42.383 } 00:11:42.383 } 00:11:42.383 }' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.383 BaseBdev2 00:11:42.383 BaseBdev3 00:11:42.383 BaseBdev4' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.383 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.383 14:44:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.384 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.644 14:44:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.644 [2024-12-09 14:44:20.565932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.644 [2024-12-09 14:44:20.566007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.644 [2024-12-09 14:44:20.566138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.644 [2024-12-09 14:44:20.566483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.644 [2024-12-09 14:44:20.566550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 74489 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74489 ']' 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74489 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74489 00:11:42.644 killing process with pid 74489 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74489' 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74489 00:11:42.644 [2024-12-09 14:44:20.613661] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.644 14:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74489 00:11:43.212 [2024-12-09 14:44:21.035787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.164 00:11:44.164 real 0m12.083s 00:11:44.164 user 0m19.216s 00:11:44.164 sys 0m2.139s 00:11:44.164 ************************************ 00:11:44.164 END TEST raid_state_function_test 00:11:44.164 ************************************ 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:44.164 14:44:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:44.164 14:44:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.164 14:44:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.164 14:44:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.164 ************************************ 00:11:44.164 START TEST raid_state_function_test_sb 00:11:44.164 ************************************ 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.164 
14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:44.164 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:44.422 Process raid pid: 75166 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75166 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75166' 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75166 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75166 ']' 00:11:44.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.422 14:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 [2024-12-09 14:44:22.370942] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:11:44.422 [2024-12-09 14:44:22.371182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.681 [2024-12-09 14:44:22.544909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.681 [2024-12-09 14:44:22.663357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.939 [2024-12-09 14:44:22.864951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.939 [2024-12-09 14:44:22.864993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.312 [2024-12-09 14:44:23.235685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.312 [2024-12-09 14:44:23.235741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.312 [2024-12-09 14:44:23.235753] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.312 [2024-12-09 14:44:23.235764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.312 [2024-12-09 14:44:23.235771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:45.312 [2024-12-09 14:44:23.235781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.312 [2024-12-09 14:44:23.235789] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:45.312 [2024-12-09 14:44:23.235798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.312 14:44:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.312 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.312 "name": "Existed_Raid", 00:11:45.312 "uuid": "f6acf938-1c4e-4511-b9cf-84da9133a1ea", 00:11:45.312 "strip_size_kb": 0, 00:11:45.312 "state": "configuring", 00:11:45.312 "raid_level": "raid1", 00:11:45.312 "superblock": true, 00:11:45.312 "num_base_bdevs": 4, 00:11:45.312 "num_base_bdevs_discovered": 0, 00:11:45.312 "num_base_bdevs_operational": 4, 00:11:45.313 "base_bdevs_list": [ 00:11:45.313 { 00:11:45.313 "name": "BaseBdev1", 00:11:45.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.313 "is_configured": false, 00:11:45.313 "data_offset": 0, 00:11:45.313 "data_size": 0 00:11:45.313 }, 00:11:45.313 { 00:11:45.313 "name": "BaseBdev2", 00:11:45.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.313 "is_configured": false, 00:11:45.313 "data_offset": 0, 00:11:45.313 "data_size": 0 00:11:45.313 }, 00:11:45.313 { 00:11:45.313 "name": "BaseBdev3", 00:11:45.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.313 "is_configured": false, 00:11:45.313 "data_offset": 0, 00:11:45.313 "data_size": 0 00:11:45.313 }, 00:11:45.313 { 00:11:45.313 "name": "BaseBdev4", 00:11:45.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.313 "is_configured": false, 00:11:45.313 "data_offset": 0, 00:11:45.313 "data_size": 0 00:11:45.313 } 00:11:45.313 ] 00:11:45.313 }' 00:11:45.313 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.313 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.590 14:44:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.590 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.590 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.590 [2024-12-09 14:44:23.706925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.590 [2024-12-09 14:44:23.707079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:45.590 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.850 [2024-12-09 14:44:23.718914] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.850 [2024-12-09 14:44:23.719018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.850 [2024-12-09 14:44:23.719055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.850 [2024-12-09 14:44:23.719083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.850 [2024-12-09 14:44:23.719113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.850 [2024-12-09 14:44:23.719137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.850 [2024-12-09 14:44:23.719158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:45.850 [2024-12-09 14:44:23.719192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.850 [2024-12-09 14:44:23.768523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.850 BaseBdev1 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.850 [ 00:11:45.850 { 00:11:45.850 "name": "BaseBdev1", 00:11:45.850 "aliases": [ 00:11:45.850 "252fd249-c894-44c2-b54f-9dfa2ed3f546" 00:11:45.850 ], 00:11:45.850 "product_name": "Malloc disk", 00:11:45.850 "block_size": 512, 00:11:45.850 "num_blocks": 65536, 00:11:45.850 "uuid": "252fd249-c894-44c2-b54f-9dfa2ed3f546", 00:11:45.850 "assigned_rate_limits": { 00:11:45.850 "rw_ios_per_sec": 0, 00:11:45.850 "rw_mbytes_per_sec": 0, 00:11:45.850 "r_mbytes_per_sec": 0, 00:11:45.850 "w_mbytes_per_sec": 0 00:11:45.850 }, 00:11:45.850 "claimed": true, 00:11:45.850 "claim_type": "exclusive_write", 00:11:45.850 "zoned": false, 00:11:45.850 "supported_io_types": { 00:11:45.850 "read": true, 00:11:45.850 "write": true, 00:11:45.850 "unmap": true, 00:11:45.850 "flush": true, 00:11:45.850 "reset": true, 00:11:45.850 "nvme_admin": false, 00:11:45.850 "nvme_io": false, 00:11:45.850 "nvme_io_md": false, 00:11:45.850 "write_zeroes": true, 00:11:45.850 "zcopy": true, 00:11:45.850 "get_zone_info": false, 00:11:45.850 "zone_management": false, 00:11:45.850 "zone_append": false, 00:11:45.850 "compare": false, 00:11:45.850 "compare_and_write": false, 00:11:45.850 "abort": true, 00:11:45.850 "seek_hole": false, 00:11:45.850 "seek_data": false, 00:11:45.850 "copy": true, 00:11:45.850 "nvme_iov_md": false 00:11:45.850 }, 00:11:45.850 "memory_domains": [ 00:11:45.850 { 00:11:45.850 "dma_device_id": "system", 00:11:45.850 "dma_device_type": 1 00:11:45.850 }, 00:11:45.850 { 00:11:45.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.850 "dma_device_type": 2 00:11:45.850 } 00:11:45.850 ], 00:11:45.850 "driver_specific": {} 
00:11:45.850 } 00:11:45.850 ] 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.850 "name": "Existed_Raid", 00:11:45.850 "uuid": "110a10e3-5276-4605-9de1-9a8aa472e3ac", 00:11:45.850 "strip_size_kb": 0, 00:11:45.850 "state": "configuring", 00:11:45.850 "raid_level": "raid1", 00:11:45.850 "superblock": true, 00:11:45.850 "num_base_bdevs": 4, 00:11:45.850 "num_base_bdevs_discovered": 1, 00:11:45.850 "num_base_bdevs_operational": 4, 00:11:45.850 "base_bdevs_list": [ 00:11:45.850 { 00:11:45.850 "name": "BaseBdev1", 00:11:45.850 "uuid": "252fd249-c894-44c2-b54f-9dfa2ed3f546", 00:11:45.850 "is_configured": true, 00:11:45.850 "data_offset": 2048, 00:11:45.850 "data_size": 63488 00:11:45.850 }, 00:11:45.850 { 00:11:45.850 "name": "BaseBdev2", 00:11:45.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.850 "is_configured": false, 00:11:45.850 "data_offset": 0, 00:11:45.850 "data_size": 0 00:11:45.850 }, 00:11:45.850 { 00:11:45.850 "name": "BaseBdev3", 00:11:45.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.850 "is_configured": false, 00:11:45.850 "data_offset": 0, 00:11:45.850 "data_size": 0 00:11:45.850 }, 00:11:45.850 { 00:11:45.850 "name": "BaseBdev4", 00:11:45.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.850 "is_configured": false, 00:11:45.850 "data_offset": 0, 00:11:45.850 "data_size": 0 00:11:45.850 } 00:11:45.850 ] 00:11:45.850 }' 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.850 14:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.419 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.419 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.419 14:44:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.419 [2024-12-09 14:44:24.295700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.419 [2024-12-09 14:44:24.295763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:46.419 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.419 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:46.419 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.419 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.419 [2024-12-09 14:44:24.307733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.420 [2024-12-09 14:44:24.309794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.420 [2024-12-09 14:44:24.309844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.420 [2024-12-09 14:44:24.309855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:46.420 [2024-12-09 14:44:24.309868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:46.420 [2024-12-09 14:44:24.309876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:46.420 [2024-12-09 14:44:24.309885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:46.420 14:44:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.420 "name": 
"Existed_Raid", 00:11:46.420 "uuid": "57a2e551-a65e-4804-951d-a106fbeab465", 00:11:46.420 "strip_size_kb": 0, 00:11:46.420 "state": "configuring", 00:11:46.420 "raid_level": "raid1", 00:11:46.420 "superblock": true, 00:11:46.420 "num_base_bdevs": 4, 00:11:46.420 "num_base_bdevs_discovered": 1, 00:11:46.420 "num_base_bdevs_operational": 4, 00:11:46.420 "base_bdevs_list": [ 00:11:46.420 { 00:11:46.420 "name": "BaseBdev1", 00:11:46.420 "uuid": "252fd249-c894-44c2-b54f-9dfa2ed3f546", 00:11:46.420 "is_configured": true, 00:11:46.420 "data_offset": 2048, 00:11:46.420 "data_size": 63488 00:11:46.420 }, 00:11:46.420 { 00:11:46.420 "name": "BaseBdev2", 00:11:46.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.420 "is_configured": false, 00:11:46.420 "data_offset": 0, 00:11:46.420 "data_size": 0 00:11:46.420 }, 00:11:46.420 { 00:11:46.420 "name": "BaseBdev3", 00:11:46.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.420 "is_configured": false, 00:11:46.420 "data_offset": 0, 00:11:46.420 "data_size": 0 00:11:46.420 }, 00:11:46.420 { 00:11:46.420 "name": "BaseBdev4", 00:11:46.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.420 "is_configured": false, 00:11:46.420 "data_offset": 0, 00:11:46.420 "data_size": 0 00:11:46.420 } 00:11:46.420 ] 00:11:46.420 }' 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.420 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.679 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.679 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.679 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.938 [2024-12-09 14:44:24.812414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.938 
BaseBdev2 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.938 [ 00:11:46.938 { 00:11:46.938 "name": "BaseBdev2", 00:11:46.938 "aliases": [ 00:11:46.938 "298e8f71-7da4-44b2-8cdd-884b7ea281ba" 00:11:46.938 ], 00:11:46.938 "product_name": "Malloc disk", 00:11:46.938 "block_size": 512, 00:11:46.938 "num_blocks": 65536, 00:11:46.938 "uuid": "298e8f71-7da4-44b2-8cdd-884b7ea281ba", 00:11:46.938 "assigned_rate_limits": { 
00:11:46.938 "rw_ios_per_sec": 0, 00:11:46.938 "rw_mbytes_per_sec": 0, 00:11:46.938 "r_mbytes_per_sec": 0, 00:11:46.938 "w_mbytes_per_sec": 0 00:11:46.938 }, 00:11:46.938 "claimed": true, 00:11:46.938 "claim_type": "exclusive_write", 00:11:46.938 "zoned": false, 00:11:46.938 "supported_io_types": { 00:11:46.938 "read": true, 00:11:46.938 "write": true, 00:11:46.938 "unmap": true, 00:11:46.938 "flush": true, 00:11:46.938 "reset": true, 00:11:46.938 "nvme_admin": false, 00:11:46.938 "nvme_io": false, 00:11:46.938 "nvme_io_md": false, 00:11:46.938 "write_zeroes": true, 00:11:46.938 "zcopy": true, 00:11:46.938 "get_zone_info": false, 00:11:46.938 "zone_management": false, 00:11:46.938 "zone_append": false, 00:11:46.938 "compare": false, 00:11:46.938 "compare_and_write": false, 00:11:46.938 "abort": true, 00:11:46.938 "seek_hole": false, 00:11:46.938 "seek_data": false, 00:11:46.938 "copy": true, 00:11:46.938 "nvme_iov_md": false 00:11:46.938 }, 00:11:46.938 "memory_domains": [ 00:11:46.938 { 00:11:46.938 "dma_device_id": "system", 00:11:46.938 "dma_device_type": 1 00:11:46.938 }, 00:11:46.938 { 00:11:46.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.938 "dma_device_type": 2 00:11:46.938 } 00:11:46.938 ], 00:11:46.938 "driver_specific": {} 00:11:46.938 } 00:11:46.938 ] 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:46.938 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.939 "name": "Existed_Raid", 00:11:46.939 "uuid": "57a2e551-a65e-4804-951d-a106fbeab465", 00:11:46.939 "strip_size_kb": 0, 00:11:46.939 "state": "configuring", 00:11:46.939 "raid_level": "raid1", 00:11:46.939 "superblock": true, 00:11:46.939 "num_base_bdevs": 4, 00:11:46.939 "num_base_bdevs_discovered": 2, 00:11:46.939 "num_base_bdevs_operational": 4, 00:11:46.939 
"base_bdevs_list": [ 00:11:46.939 { 00:11:46.939 "name": "BaseBdev1", 00:11:46.939 "uuid": "252fd249-c894-44c2-b54f-9dfa2ed3f546", 00:11:46.939 "is_configured": true, 00:11:46.939 "data_offset": 2048, 00:11:46.939 "data_size": 63488 00:11:46.939 }, 00:11:46.939 { 00:11:46.939 "name": "BaseBdev2", 00:11:46.939 "uuid": "298e8f71-7da4-44b2-8cdd-884b7ea281ba", 00:11:46.939 "is_configured": true, 00:11:46.939 "data_offset": 2048, 00:11:46.939 "data_size": 63488 00:11:46.939 }, 00:11:46.939 { 00:11:46.939 "name": "BaseBdev3", 00:11:46.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.939 "is_configured": false, 00:11:46.939 "data_offset": 0, 00:11:46.939 "data_size": 0 00:11:46.939 }, 00:11:46.939 { 00:11:46.939 "name": "BaseBdev4", 00:11:46.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.939 "is_configured": false, 00:11:46.939 "data_offset": 0, 00:11:46.939 "data_size": 0 00:11:46.939 } 00:11:46.939 ] 00:11:46.939 }' 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.939 14:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.505 [2024-12-09 14:44:25.398140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.505 BaseBdev3 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.505 [ 00:11:47.505 { 00:11:47.505 "name": "BaseBdev3", 00:11:47.505 "aliases": [ 00:11:47.505 "72c160e9-3ee4-4224-8f24-91c6bbf423f6" 00:11:47.505 ], 00:11:47.505 "product_name": "Malloc disk", 00:11:47.505 "block_size": 512, 00:11:47.505 "num_blocks": 65536, 00:11:47.505 "uuid": "72c160e9-3ee4-4224-8f24-91c6bbf423f6", 00:11:47.505 "assigned_rate_limits": { 00:11:47.505 "rw_ios_per_sec": 0, 00:11:47.505 "rw_mbytes_per_sec": 0, 00:11:47.505 "r_mbytes_per_sec": 0, 00:11:47.505 "w_mbytes_per_sec": 0 00:11:47.505 }, 00:11:47.505 "claimed": true, 00:11:47.505 "claim_type": "exclusive_write", 00:11:47.505 "zoned": false, 00:11:47.505 "supported_io_types": { 00:11:47.505 "read": true, 00:11:47.505 
"write": true, 00:11:47.505 "unmap": true, 00:11:47.505 "flush": true, 00:11:47.505 "reset": true, 00:11:47.505 "nvme_admin": false, 00:11:47.505 "nvme_io": false, 00:11:47.505 "nvme_io_md": false, 00:11:47.505 "write_zeroes": true, 00:11:47.505 "zcopy": true, 00:11:47.505 "get_zone_info": false, 00:11:47.505 "zone_management": false, 00:11:47.505 "zone_append": false, 00:11:47.505 "compare": false, 00:11:47.505 "compare_and_write": false, 00:11:47.505 "abort": true, 00:11:47.505 "seek_hole": false, 00:11:47.505 "seek_data": false, 00:11:47.505 "copy": true, 00:11:47.505 "nvme_iov_md": false 00:11:47.505 }, 00:11:47.505 "memory_domains": [ 00:11:47.505 { 00:11:47.505 "dma_device_id": "system", 00:11:47.505 "dma_device_type": 1 00:11:47.505 }, 00:11:47.505 { 00:11:47.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.505 "dma_device_type": 2 00:11:47.505 } 00:11:47.505 ], 00:11:47.505 "driver_specific": {} 00:11:47.505 } 00:11:47.505 ] 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.505 "name": "Existed_Raid", 00:11:47.505 "uuid": "57a2e551-a65e-4804-951d-a106fbeab465", 00:11:47.505 "strip_size_kb": 0, 00:11:47.505 "state": "configuring", 00:11:47.505 "raid_level": "raid1", 00:11:47.505 "superblock": true, 00:11:47.505 "num_base_bdevs": 4, 00:11:47.505 "num_base_bdevs_discovered": 3, 00:11:47.505 "num_base_bdevs_operational": 4, 00:11:47.505 "base_bdevs_list": [ 00:11:47.505 { 00:11:47.505 "name": "BaseBdev1", 00:11:47.505 "uuid": "252fd249-c894-44c2-b54f-9dfa2ed3f546", 00:11:47.505 "is_configured": true, 00:11:47.505 "data_offset": 2048, 00:11:47.505 "data_size": 63488 00:11:47.505 }, 00:11:47.505 { 00:11:47.505 "name": "BaseBdev2", 00:11:47.505 "uuid": 
"298e8f71-7da4-44b2-8cdd-884b7ea281ba", 00:11:47.505 "is_configured": true, 00:11:47.505 "data_offset": 2048, 00:11:47.505 "data_size": 63488 00:11:47.505 }, 00:11:47.505 { 00:11:47.505 "name": "BaseBdev3", 00:11:47.505 "uuid": "72c160e9-3ee4-4224-8f24-91c6bbf423f6", 00:11:47.505 "is_configured": true, 00:11:47.505 "data_offset": 2048, 00:11:47.505 "data_size": 63488 00:11:47.505 }, 00:11:47.505 { 00:11:47.505 "name": "BaseBdev4", 00:11:47.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.505 "is_configured": false, 00:11:47.505 "data_offset": 0, 00:11:47.505 "data_size": 0 00:11:47.505 } 00:11:47.505 ] 00:11:47.505 }' 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.505 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.764 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:47.764 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.764 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.024 [2024-12-09 14:44:25.921953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:48.024 [2024-12-09 14:44:25.922383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:48.024 [2024-12-09 14:44:25.922440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.024 [2024-12-09 14:44:25.922749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:48.024 [2024-12-09 14:44:25.922981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:48.024 [2024-12-09 14:44:25.923040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:48.024 BaseBdev4 00:11:48.024 [2024-12-09 14:44:25.923252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.024 [ 00:11:48.024 { 00:11:48.024 "name": "BaseBdev4", 00:11:48.024 "aliases": [ 00:11:48.024 "a5b1a61e-bed7-4963-881c-4ea1fef98637" 00:11:48.024 ], 00:11:48.024 "product_name": "Malloc disk", 00:11:48.024 "block_size": 512, 00:11:48.024 
"num_blocks": 65536, 00:11:48.024 "uuid": "a5b1a61e-bed7-4963-881c-4ea1fef98637", 00:11:48.024 "assigned_rate_limits": { 00:11:48.024 "rw_ios_per_sec": 0, 00:11:48.024 "rw_mbytes_per_sec": 0, 00:11:48.024 "r_mbytes_per_sec": 0, 00:11:48.024 "w_mbytes_per_sec": 0 00:11:48.024 }, 00:11:48.024 "claimed": true, 00:11:48.024 "claim_type": "exclusive_write", 00:11:48.024 "zoned": false, 00:11:48.024 "supported_io_types": { 00:11:48.024 "read": true, 00:11:48.024 "write": true, 00:11:48.024 "unmap": true, 00:11:48.024 "flush": true, 00:11:48.024 "reset": true, 00:11:48.024 "nvme_admin": false, 00:11:48.024 "nvme_io": false, 00:11:48.024 "nvme_io_md": false, 00:11:48.024 "write_zeroes": true, 00:11:48.024 "zcopy": true, 00:11:48.024 "get_zone_info": false, 00:11:48.024 "zone_management": false, 00:11:48.024 "zone_append": false, 00:11:48.024 "compare": false, 00:11:48.024 "compare_and_write": false, 00:11:48.024 "abort": true, 00:11:48.024 "seek_hole": false, 00:11:48.024 "seek_data": false, 00:11:48.024 "copy": true, 00:11:48.024 "nvme_iov_md": false 00:11:48.024 }, 00:11:48.024 "memory_domains": [ 00:11:48.024 { 00:11:48.024 "dma_device_id": "system", 00:11:48.024 "dma_device_type": 1 00:11:48.024 }, 00:11:48.024 { 00:11:48.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.024 "dma_device_type": 2 00:11:48.024 } 00:11:48.024 ], 00:11:48.024 "driver_specific": {} 00:11:48.024 } 00:11:48.024 ] 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.024 14:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.024 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.024 "name": "Existed_Raid", 00:11:48.024 "uuid": "57a2e551-a65e-4804-951d-a106fbeab465", 00:11:48.024 "strip_size_kb": 0, 00:11:48.024 "state": "online", 00:11:48.024 "raid_level": "raid1", 00:11:48.024 "superblock": true, 00:11:48.024 "num_base_bdevs": 4, 
00:11:48.025 "num_base_bdevs_discovered": 4, 00:11:48.025 "num_base_bdevs_operational": 4, 00:11:48.025 "base_bdevs_list": [ 00:11:48.025 { 00:11:48.025 "name": "BaseBdev1", 00:11:48.025 "uuid": "252fd249-c894-44c2-b54f-9dfa2ed3f546", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "name": "BaseBdev2", 00:11:48.025 "uuid": "298e8f71-7da4-44b2-8cdd-884b7ea281ba", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "name": "BaseBdev3", 00:11:48.025 "uuid": "72c160e9-3ee4-4224-8f24-91c6bbf423f6", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 00:11:48.025 }, 00:11:48.025 { 00:11:48.025 "name": "BaseBdev4", 00:11:48.025 "uuid": "a5b1a61e-bed7-4963-881c-4ea1fef98637", 00:11:48.025 "is_configured": true, 00:11:48.025 "data_offset": 2048, 00:11:48.025 "data_size": 63488 00:11:48.025 } 00:11:48.025 ] 00:11:48.025 }' 00:11:48.025 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.025 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.284 
14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.284 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.284 [2024-12-09 14:44:26.405558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.544 "name": "Existed_Raid", 00:11:48.544 "aliases": [ 00:11:48.544 "57a2e551-a65e-4804-951d-a106fbeab465" 00:11:48.544 ], 00:11:48.544 "product_name": "Raid Volume", 00:11:48.544 "block_size": 512, 00:11:48.544 "num_blocks": 63488, 00:11:48.544 "uuid": "57a2e551-a65e-4804-951d-a106fbeab465", 00:11:48.544 "assigned_rate_limits": { 00:11:48.544 "rw_ios_per_sec": 0, 00:11:48.544 "rw_mbytes_per_sec": 0, 00:11:48.544 "r_mbytes_per_sec": 0, 00:11:48.544 "w_mbytes_per_sec": 0 00:11:48.544 }, 00:11:48.544 "claimed": false, 00:11:48.544 "zoned": false, 00:11:48.544 "supported_io_types": { 00:11:48.544 "read": true, 00:11:48.544 "write": true, 00:11:48.544 "unmap": false, 00:11:48.544 "flush": false, 00:11:48.544 "reset": true, 00:11:48.544 "nvme_admin": false, 00:11:48.544 "nvme_io": false, 00:11:48.544 "nvme_io_md": false, 00:11:48.544 "write_zeroes": true, 00:11:48.544 "zcopy": false, 00:11:48.544 "get_zone_info": false, 00:11:48.544 "zone_management": false, 00:11:48.544 "zone_append": false, 00:11:48.544 "compare": false, 00:11:48.544 "compare_and_write": false, 00:11:48.544 "abort": false, 00:11:48.544 "seek_hole": false, 00:11:48.544 "seek_data": false, 00:11:48.544 "copy": false, 00:11:48.544 
"nvme_iov_md": false 00:11:48.544 }, 00:11:48.544 "memory_domains": [ 00:11:48.544 { 00:11:48.544 "dma_device_id": "system", 00:11:48.544 "dma_device_type": 1 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.544 "dma_device_type": 2 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "dma_device_id": "system", 00:11:48.544 "dma_device_type": 1 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.544 "dma_device_type": 2 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "dma_device_id": "system", 00:11:48.544 "dma_device_type": 1 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.544 "dma_device_type": 2 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "dma_device_id": "system", 00:11:48.544 "dma_device_type": 1 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.544 "dma_device_type": 2 00:11:48.544 } 00:11:48.544 ], 00:11:48.544 "driver_specific": { 00:11:48.544 "raid": { 00:11:48.544 "uuid": "57a2e551-a65e-4804-951d-a106fbeab465", 00:11:48.544 "strip_size_kb": 0, 00:11:48.544 "state": "online", 00:11:48.544 "raid_level": "raid1", 00:11:48.544 "superblock": true, 00:11:48.544 "num_base_bdevs": 4, 00:11:48.544 "num_base_bdevs_discovered": 4, 00:11:48.544 "num_base_bdevs_operational": 4, 00:11:48.544 "base_bdevs_list": [ 00:11:48.544 { 00:11:48.544 "name": "BaseBdev1", 00:11:48.544 "uuid": "252fd249-c894-44c2-b54f-9dfa2ed3f546", 00:11:48.544 "is_configured": true, 00:11:48.544 "data_offset": 2048, 00:11:48.544 "data_size": 63488 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "name": "BaseBdev2", 00:11:48.544 "uuid": "298e8f71-7da4-44b2-8cdd-884b7ea281ba", 00:11:48.544 "is_configured": true, 00:11:48.544 "data_offset": 2048, 00:11:48.544 "data_size": 63488 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "name": "BaseBdev3", 00:11:48.544 "uuid": "72c160e9-3ee4-4224-8f24-91c6bbf423f6", 00:11:48.544 "is_configured": true, 
00:11:48.544 "data_offset": 2048, 00:11:48.544 "data_size": 63488 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "name": "BaseBdev4", 00:11:48.544 "uuid": "a5b1a61e-bed7-4963-881c-4ea1fef98637", 00:11:48.544 "is_configured": true, 00:11:48.544 "data_offset": 2048, 00:11:48.544 "data_size": 63488 00:11:48.544 } 00:11:48.544 ] 00:11:48.544 } 00:11:48.544 } 00:11:48.544 }' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:48.544 BaseBdev2 00:11:48.544 BaseBdev3 00:11:48.544 BaseBdev4' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.544 14:44:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.544 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 [2024-12-09 14:44:26.728814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:48.804 14:44:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.804 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.805 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.805 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.805 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.805 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.805 "name": "Existed_Raid", 00:11:48.805 "uuid": "57a2e551-a65e-4804-951d-a106fbeab465", 00:11:48.805 "strip_size_kb": 0, 00:11:48.805 
"state": "online", 00:11:48.805 "raid_level": "raid1", 00:11:48.805 "superblock": true, 00:11:48.805 "num_base_bdevs": 4, 00:11:48.805 "num_base_bdevs_discovered": 3, 00:11:48.805 "num_base_bdevs_operational": 3, 00:11:48.805 "base_bdevs_list": [ 00:11:48.805 { 00:11:48.805 "name": null, 00:11:48.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.805 "is_configured": false, 00:11:48.805 "data_offset": 0, 00:11:48.805 "data_size": 63488 00:11:48.805 }, 00:11:48.805 { 00:11:48.805 "name": "BaseBdev2", 00:11:48.805 "uuid": "298e8f71-7da4-44b2-8cdd-884b7ea281ba", 00:11:48.805 "is_configured": true, 00:11:48.805 "data_offset": 2048, 00:11:48.805 "data_size": 63488 00:11:48.805 }, 00:11:48.805 { 00:11:48.805 "name": "BaseBdev3", 00:11:48.805 "uuid": "72c160e9-3ee4-4224-8f24-91c6bbf423f6", 00:11:48.805 "is_configured": true, 00:11:48.805 "data_offset": 2048, 00:11:48.805 "data_size": 63488 00:11:48.805 }, 00:11:48.805 { 00:11:48.805 "name": "BaseBdev4", 00:11:48.805 "uuid": "a5b1a61e-bed7-4963-881c-4ea1fef98637", 00:11:48.805 "is_configured": true, 00:11:48.805 "data_offset": 2048, 00:11:48.805 "data_size": 63488 00:11:48.805 } 00:11:48.805 ] 00:11:48.805 }' 00:11:48.805 14:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.805 14:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.373 14:44:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.373 [2024-12-09 14:44:27.317800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.373 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.374 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.374 [2024-12-09 14:44:27.483168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.633 [2024-12-09 14:44:27.636874] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:49.633 [2024-12-09 14:44:27.637058] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.633 [2024-12-09 14:44:27.739012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.633 [2024-12-09 14:44:27.739098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.633 [2024-12-09 14:44:27.739110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.633 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.892 BaseBdev2 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:49.892 [ 00:11:49.892 { 00:11:49.892 "name": "BaseBdev2", 00:11:49.892 "aliases": [ 00:11:49.892 "a6069c2e-481f-4876-b481-7eb43a9dd9a6" 00:11:49.892 ], 00:11:49.892 "product_name": "Malloc disk", 00:11:49.892 "block_size": 512, 00:11:49.892 "num_blocks": 65536, 00:11:49.892 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:49.892 "assigned_rate_limits": { 00:11:49.892 "rw_ios_per_sec": 0, 00:11:49.892 "rw_mbytes_per_sec": 0, 00:11:49.892 "r_mbytes_per_sec": 0, 00:11:49.892 "w_mbytes_per_sec": 0 00:11:49.892 }, 00:11:49.892 "claimed": false, 00:11:49.892 "zoned": false, 00:11:49.892 "supported_io_types": { 00:11:49.892 "read": true, 00:11:49.892 "write": true, 00:11:49.892 "unmap": true, 00:11:49.892 "flush": true, 00:11:49.892 "reset": true, 00:11:49.892 "nvme_admin": false, 00:11:49.892 "nvme_io": false, 00:11:49.892 "nvme_io_md": false, 00:11:49.892 "write_zeroes": true, 00:11:49.892 "zcopy": true, 00:11:49.892 "get_zone_info": false, 00:11:49.892 "zone_management": false, 00:11:49.892 "zone_append": false, 00:11:49.892 "compare": false, 00:11:49.892 "compare_and_write": false, 00:11:49.892 "abort": true, 00:11:49.892 "seek_hole": false, 00:11:49.892 "seek_data": false, 00:11:49.892 "copy": true, 00:11:49.892 "nvme_iov_md": false 00:11:49.892 }, 00:11:49.892 "memory_domains": [ 00:11:49.892 { 00:11:49.892 "dma_device_id": "system", 00:11:49.892 "dma_device_type": 1 00:11:49.892 }, 00:11:49.892 { 00:11:49.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.892 "dma_device_type": 2 00:11:49.892 } 00:11:49.892 ], 00:11:49.892 "driver_specific": {} 00:11:49.892 } 00:11:49.892 ] 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.892 14:44:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.892 BaseBdev3 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.892 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.893 14:44:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 [ 00:11:49.893 { 00:11:49.893 "name": "BaseBdev3", 00:11:49.893 "aliases": [ 00:11:49.893 "7dd82821-d44e-483e-ae8b-529d64a9c27d" 00:11:49.893 ], 00:11:49.893 "product_name": "Malloc disk", 00:11:49.893 "block_size": 512, 00:11:49.893 "num_blocks": 65536, 00:11:49.893 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:49.893 "assigned_rate_limits": { 00:11:49.893 "rw_ios_per_sec": 0, 00:11:49.893 "rw_mbytes_per_sec": 0, 00:11:49.893 "r_mbytes_per_sec": 0, 00:11:49.893 "w_mbytes_per_sec": 0 00:11:49.893 }, 00:11:49.893 "claimed": false, 00:11:49.893 "zoned": false, 00:11:49.893 "supported_io_types": { 00:11:49.893 "read": true, 00:11:49.893 "write": true, 00:11:49.893 "unmap": true, 00:11:49.893 "flush": true, 00:11:49.893 "reset": true, 00:11:49.893 "nvme_admin": false, 00:11:49.893 "nvme_io": false, 00:11:49.893 "nvme_io_md": false, 00:11:49.893 "write_zeroes": true, 00:11:49.893 "zcopy": true, 00:11:49.893 "get_zone_info": false, 00:11:49.893 "zone_management": false, 00:11:49.893 "zone_append": false, 00:11:49.893 "compare": false, 00:11:49.893 "compare_and_write": false, 00:11:49.893 "abort": true, 00:11:49.893 "seek_hole": false, 00:11:49.893 "seek_data": false, 00:11:49.893 "copy": true, 00:11:49.893 "nvme_iov_md": false 00:11:49.893 }, 00:11:49.893 "memory_domains": [ 00:11:49.893 { 00:11:49.893 "dma_device_id": "system", 00:11:49.893 "dma_device_type": 1 00:11:49.893 }, 00:11:49.893 { 00:11:49.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.893 "dma_device_type": 2 00:11:49.893 } 00:11:49.893 ], 00:11:49.893 "driver_specific": {} 00:11:49.893 } 00:11:49.893 ] 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.893 14:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 BaseBdev4 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.893 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.152 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.152 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:50.152 14:44:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.152 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.152 [ 00:11:50.152 { 00:11:50.152 "name": "BaseBdev4", 00:11:50.152 "aliases": [ 00:11:50.152 "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1" 00:11:50.152 ], 00:11:50.152 "product_name": "Malloc disk", 00:11:50.152 "block_size": 512, 00:11:50.152 "num_blocks": 65536, 00:11:50.152 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:50.152 "assigned_rate_limits": { 00:11:50.152 "rw_ios_per_sec": 0, 00:11:50.152 "rw_mbytes_per_sec": 0, 00:11:50.152 "r_mbytes_per_sec": 0, 00:11:50.152 "w_mbytes_per_sec": 0 00:11:50.152 }, 00:11:50.152 "claimed": false, 00:11:50.152 "zoned": false, 00:11:50.152 "supported_io_types": { 00:11:50.152 "read": true, 00:11:50.152 "write": true, 00:11:50.152 "unmap": true, 00:11:50.152 "flush": true, 00:11:50.152 "reset": true, 00:11:50.152 "nvme_admin": false, 00:11:50.152 "nvme_io": false, 00:11:50.152 "nvme_io_md": false, 00:11:50.152 "write_zeroes": true, 00:11:50.152 "zcopy": true, 00:11:50.152 "get_zone_info": false, 00:11:50.152 "zone_management": false, 00:11:50.152 "zone_append": false, 00:11:50.152 "compare": false, 00:11:50.152 "compare_and_write": false, 00:11:50.152 "abort": true, 00:11:50.152 "seek_hole": false, 00:11:50.152 "seek_data": false, 00:11:50.152 "copy": true, 00:11:50.152 "nvme_iov_md": false 00:11:50.152 }, 00:11:50.152 "memory_domains": [ 00:11:50.153 { 00:11:50.153 "dma_device_id": "system", 00:11:50.153 "dma_device_type": 1 00:11:50.153 }, 00:11:50.153 { 00:11:50.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.153 "dma_device_type": 2 00:11:50.153 } 00:11:50.153 ], 00:11:50.153 "driver_specific": {} 00:11:50.153 } 00:11:50.153 ] 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.153 [2024-12-09 14:44:28.059735] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.153 [2024-12-09 14:44:28.059833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.153 [2024-12-09 14:44:28.059882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.153 [2024-12-09 14:44:28.061888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.153 [2024-12-09 14:44:28.061941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.153 "name": "Existed_Raid", 00:11:50.153 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:50.153 "strip_size_kb": 0, 00:11:50.153 "state": "configuring", 00:11:50.153 "raid_level": "raid1", 00:11:50.153 "superblock": true, 00:11:50.153 "num_base_bdevs": 4, 00:11:50.153 "num_base_bdevs_discovered": 3, 00:11:50.153 "num_base_bdevs_operational": 4, 00:11:50.153 "base_bdevs_list": [ 00:11:50.153 { 00:11:50.153 "name": "BaseBdev1", 00:11:50.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.153 "is_configured": false, 00:11:50.153 "data_offset": 0, 00:11:50.153 "data_size": 0 00:11:50.153 }, 00:11:50.153 { 00:11:50.153 "name": "BaseBdev2", 00:11:50.153 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 
00:11:50.153 "is_configured": true, 00:11:50.153 "data_offset": 2048, 00:11:50.153 "data_size": 63488 00:11:50.153 }, 00:11:50.153 { 00:11:50.153 "name": "BaseBdev3", 00:11:50.153 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:50.153 "is_configured": true, 00:11:50.153 "data_offset": 2048, 00:11:50.153 "data_size": 63488 00:11:50.153 }, 00:11:50.153 { 00:11:50.153 "name": "BaseBdev4", 00:11:50.153 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:50.153 "is_configured": true, 00:11:50.153 "data_offset": 2048, 00:11:50.153 "data_size": 63488 00:11:50.153 } 00:11:50.153 ] 00:11:50.153 }' 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.153 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.412 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:50.412 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.412 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.413 [2024-12-09 14:44:28.518988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.413 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.672 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.672 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.672 "name": "Existed_Raid", 00:11:50.672 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:50.672 "strip_size_kb": 0, 00:11:50.672 "state": "configuring", 00:11:50.672 "raid_level": "raid1", 00:11:50.672 "superblock": true, 00:11:50.672 "num_base_bdevs": 4, 00:11:50.672 "num_base_bdevs_discovered": 2, 00:11:50.672 "num_base_bdevs_operational": 4, 00:11:50.672 "base_bdevs_list": [ 00:11:50.672 { 00:11:50.672 "name": "BaseBdev1", 00:11:50.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.672 "is_configured": false, 00:11:50.672 "data_offset": 0, 00:11:50.672 "data_size": 0 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "name": null, 00:11:50.672 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:50.672 
"is_configured": false, 00:11:50.672 "data_offset": 0, 00:11:50.672 "data_size": 63488 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "name": "BaseBdev3", 00:11:50.672 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:50.672 "is_configured": true, 00:11:50.672 "data_offset": 2048, 00:11:50.672 "data_size": 63488 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "name": "BaseBdev4", 00:11:50.672 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:50.672 "is_configured": true, 00:11:50.672 "data_offset": 2048, 00:11:50.672 "data_size": 63488 00:11:50.672 } 00:11:50.672 ] 00:11:50.672 }' 00:11:50.673 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.673 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.932 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.932 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.932 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.932 14:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.932 14:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.932 [2024-12-09 14:44:29.043331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.932 BaseBdev1 
00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.932 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.190 [ 00:11:51.190 { 00:11:51.190 "name": "BaseBdev1", 00:11:51.190 "aliases": [ 00:11:51.190 "ff824dbf-8c57-417d-82e4-f0432ff2314d" 00:11:51.190 ], 00:11:51.190 "product_name": "Malloc disk", 00:11:51.190 "block_size": 512, 00:11:51.190 "num_blocks": 65536, 00:11:51.190 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:51.190 "assigned_rate_limits": { 00:11:51.190 
"rw_ios_per_sec": 0, 00:11:51.190 "rw_mbytes_per_sec": 0, 00:11:51.190 "r_mbytes_per_sec": 0, 00:11:51.190 "w_mbytes_per_sec": 0 00:11:51.190 }, 00:11:51.190 "claimed": true, 00:11:51.190 "claim_type": "exclusive_write", 00:11:51.190 "zoned": false, 00:11:51.190 "supported_io_types": { 00:11:51.190 "read": true, 00:11:51.190 "write": true, 00:11:51.190 "unmap": true, 00:11:51.190 "flush": true, 00:11:51.190 "reset": true, 00:11:51.190 "nvme_admin": false, 00:11:51.190 "nvme_io": false, 00:11:51.190 "nvme_io_md": false, 00:11:51.190 "write_zeroes": true, 00:11:51.190 "zcopy": true, 00:11:51.190 "get_zone_info": false, 00:11:51.190 "zone_management": false, 00:11:51.190 "zone_append": false, 00:11:51.190 "compare": false, 00:11:51.190 "compare_and_write": false, 00:11:51.190 "abort": true, 00:11:51.190 "seek_hole": false, 00:11:51.190 "seek_data": false, 00:11:51.190 "copy": true, 00:11:51.190 "nvme_iov_md": false 00:11:51.190 }, 00:11:51.190 "memory_domains": [ 00:11:51.190 { 00:11:51.190 "dma_device_id": "system", 00:11:51.190 "dma_device_type": 1 00:11:51.190 }, 00:11:51.190 { 00:11:51.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.190 "dma_device_type": 2 00:11:51.190 } 00:11:51.190 ], 00:11:51.190 "driver_specific": {} 00:11:51.190 } 00:11:51.190 ] 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.190 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.191 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.191 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.191 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.191 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.191 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.191 "name": "Existed_Raid", 00:11:51.191 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:51.191 "strip_size_kb": 0, 00:11:51.191 "state": "configuring", 00:11:51.191 "raid_level": "raid1", 00:11:51.191 "superblock": true, 00:11:51.191 "num_base_bdevs": 4, 00:11:51.191 "num_base_bdevs_discovered": 3, 00:11:51.191 "num_base_bdevs_operational": 4, 00:11:51.191 "base_bdevs_list": [ 00:11:51.191 { 00:11:51.191 "name": "BaseBdev1", 00:11:51.191 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:51.191 "is_configured": true, 00:11:51.191 "data_offset": 2048, 00:11:51.191 "data_size": 63488 
00:11:51.191 }, 00:11:51.191 { 00:11:51.191 "name": null, 00:11:51.191 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:51.191 "is_configured": false, 00:11:51.191 "data_offset": 0, 00:11:51.191 "data_size": 63488 00:11:51.191 }, 00:11:51.191 { 00:11:51.191 "name": "BaseBdev3", 00:11:51.191 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:51.191 "is_configured": true, 00:11:51.191 "data_offset": 2048, 00:11:51.191 "data_size": 63488 00:11:51.191 }, 00:11:51.191 { 00:11:51.191 "name": "BaseBdev4", 00:11:51.191 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:51.191 "is_configured": true, 00:11:51.191 "data_offset": 2048, 00:11:51.191 "data_size": 63488 00:11:51.191 } 00:11:51.191 ] 00:11:51.191 }' 00:11:51.191 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.191 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.448 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.448 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.448 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.448 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:51.448 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.707 
[2024-12-09 14:44:29.590525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.707 14:44:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.707 "name": "Existed_Raid", 00:11:51.707 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:51.707 "strip_size_kb": 0, 00:11:51.707 "state": "configuring", 00:11:51.707 "raid_level": "raid1", 00:11:51.707 "superblock": true, 00:11:51.707 "num_base_bdevs": 4, 00:11:51.707 "num_base_bdevs_discovered": 2, 00:11:51.707 "num_base_bdevs_operational": 4, 00:11:51.707 "base_bdevs_list": [ 00:11:51.707 { 00:11:51.707 "name": "BaseBdev1", 00:11:51.707 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:51.707 "is_configured": true, 00:11:51.707 "data_offset": 2048, 00:11:51.707 "data_size": 63488 00:11:51.707 }, 00:11:51.707 { 00:11:51.707 "name": null, 00:11:51.707 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:51.707 "is_configured": false, 00:11:51.707 "data_offset": 0, 00:11:51.707 "data_size": 63488 00:11:51.707 }, 00:11:51.707 { 00:11:51.707 "name": null, 00:11:51.707 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:51.707 "is_configured": false, 00:11:51.707 "data_offset": 0, 00:11:51.707 "data_size": 63488 00:11:51.707 }, 00:11:51.707 { 00:11:51.707 "name": "BaseBdev4", 00:11:51.707 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:51.707 "is_configured": true, 00:11:51.707 "data_offset": 2048, 00:11:51.707 "data_size": 63488 00:11:51.707 } 00:11:51.707 ] 00:11:51.707 }' 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.707 14:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.966 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.966 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 14:44:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.966 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.225 [2024-12-09 14:44:30.109647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.225 "name": "Existed_Raid", 00:11:52.225 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:52.225 "strip_size_kb": 0, 00:11:52.225 "state": "configuring", 00:11:52.225 "raid_level": "raid1", 00:11:52.225 "superblock": true, 00:11:52.225 "num_base_bdevs": 4, 00:11:52.225 "num_base_bdevs_discovered": 3, 00:11:52.225 "num_base_bdevs_operational": 4, 00:11:52.225 "base_bdevs_list": [ 00:11:52.225 { 00:11:52.225 "name": "BaseBdev1", 00:11:52.225 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:52.225 "is_configured": true, 00:11:52.225 "data_offset": 2048, 00:11:52.225 "data_size": 63488 00:11:52.225 }, 00:11:52.225 { 00:11:52.225 "name": null, 00:11:52.225 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:52.225 "is_configured": false, 00:11:52.225 "data_offset": 0, 00:11:52.225 "data_size": 63488 00:11:52.225 }, 00:11:52.225 { 00:11:52.225 "name": "BaseBdev3", 00:11:52.225 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:52.225 "is_configured": true, 00:11:52.225 "data_offset": 2048, 00:11:52.225 "data_size": 63488 00:11:52.225 }, 00:11:52.225 { 00:11:52.225 "name": "BaseBdev4", 00:11:52.225 "uuid": 
"61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:52.225 "is_configured": true, 00:11:52.225 "data_offset": 2048, 00:11:52.225 "data_size": 63488 00:11:52.225 } 00:11:52.225 ] 00:11:52.225 }' 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.225 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.484 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.484 [2024-12-09 14:44:30.560907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.743 "name": "Existed_Raid", 00:11:52.743 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:52.743 "strip_size_kb": 0, 00:11:52.743 "state": "configuring", 00:11:52.743 "raid_level": "raid1", 00:11:52.743 "superblock": true, 00:11:52.743 "num_base_bdevs": 4, 00:11:52.743 "num_base_bdevs_discovered": 2, 00:11:52.743 "num_base_bdevs_operational": 4, 00:11:52.743 "base_bdevs_list": [ 00:11:52.743 { 00:11:52.743 "name": null, 00:11:52.743 
"uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:52.743 "is_configured": false, 00:11:52.743 "data_offset": 0, 00:11:52.743 "data_size": 63488 00:11:52.743 }, 00:11:52.743 { 00:11:52.743 "name": null, 00:11:52.743 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:52.743 "is_configured": false, 00:11:52.743 "data_offset": 0, 00:11:52.743 "data_size": 63488 00:11:52.743 }, 00:11:52.743 { 00:11:52.743 "name": "BaseBdev3", 00:11:52.743 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:52.743 "is_configured": true, 00:11:52.743 "data_offset": 2048, 00:11:52.743 "data_size": 63488 00:11:52.743 }, 00:11:52.743 { 00:11:52.743 "name": "BaseBdev4", 00:11:52.743 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:52.743 "is_configured": true, 00:11:52.743 "data_offset": 2048, 00:11:52.743 "data_size": 63488 00:11:52.743 } 00:11:52.743 ] 00:11:52.743 }' 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.743 14:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.003 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.263 [2024-12-09 14:44:31.128880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.263 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.264 "name": "Existed_Raid", 00:11:53.264 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:53.264 "strip_size_kb": 0, 00:11:53.264 "state": "configuring", 00:11:53.264 "raid_level": "raid1", 00:11:53.264 "superblock": true, 00:11:53.264 "num_base_bdevs": 4, 00:11:53.264 "num_base_bdevs_discovered": 3, 00:11:53.264 "num_base_bdevs_operational": 4, 00:11:53.264 "base_bdevs_list": [ 00:11:53.264 { 00:11:53.264 "name": null, 00:11:53.264 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:53.264 "is_configured": false, 00:11:53.264 "data_offset": 0, 00:11:53.264 "data_size": 63488 00:11:53.264 }, 00:11:53.264 { 00:11:53.264 "name": "BaseBdev2", 00:11:53.264 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:53.264 "is_configured": true, 00:11:53.264 "data_offset": 2048, 00:11:53.264 "data_size": 63488 00:11:53.264 }, 00:11:53.264 { 00:11:53.264 "name": "BaseBdev3", 00:11:53.264 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:53.264 "is_configured": true, 00:11:53.264 "data_offset": 2048, 00:11:53.264 "data_size": 63488 00:11:53.264 }, 00:11:53.264 { 00:11:53.264 "name": "BaseBdev4", 00:11:53.264 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:53.264 "is_configured": true, 00:11:53.264 "data_offset": 2048, 00:11:53.264 "data_size": 63488 00:11:53.264 } 00:11:53.264 ] 00:11:53.264 }' 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.264 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.523 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff824dbf-8c57-417d-82e4-f0432ff2314d 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.783 [2024-12-09 14:44:31.695215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:53.783 [2024-12-09 14:44:31.695617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:53.783 [2024-12-09 14:44:31.695680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.783 [2024-12-09 14:44:31.696006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:53.783 
[2024-12-09 14:44:31.696248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:53.783 NewBaseBdev 00:11:53.783 [2024-12-09 14:44:31.696301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:53.783 [2024-12-09 14:44:31.696500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.783 [ 00:11:53.783 { 00:11:53.783 "name": "NewBaseBdev", 00:11:53.783 "aliases": [ 00:11:53.783 "ff824dbf-8c57-417d-82e4-f0432ff2314d" 00:11:53.783 ], 00:11:53.783 "product_name": "Malloc disk", 00:11:53.783 "block_size": 512, 00:11:53.783 "num_blocks": 65536, 00:11:53.783 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:53.783 "assigned_rate_limits": { 00:11:53.783 "rw_ios_per_sec": 0, 00:11:53.783 "rw_mbytes_per_sec": 0, 00:11:53.783 "r_mbytes_per_sec": 0, 00:11:53.783 "w_mbytes_per_sec": 0 00:11:53.783 }, 00:11:53.783 "claimed": true, 00:11:53.783 "claim_type": "exclusive_write", 00:11:53.783 "zoned": false, 00:11:53.783 "supported_io_types": { 00:11:53.783 "read": true, 00:11:53.783 "write": true, 00:11:53.783 "unmap": true, 00:11:53.783 "flush": true, 00:11:53.783 "reset": true, 00:11:53.783 "nvme_admin": false, 00:11:53.783 "nvme_io": false, 00:11:53.783 "nvme_io_md": false, 00:11:53.783 "write_zeroes": true, 00:11:53.783 "zcopy": true, 00:11:53.783 "get_zone_info": false, 00:11:53.783 "zone_management": false, 00:11:53.783 "zone_append": false, 00:11:53.783 "compare": false, 00:11:53.783 "compare_and_write": false, 00:11:53.783 "abort": true, 00:11:53.783 "seek_hole": false, 00:11:53.783 "seek_data": false, 00:11:53.783 "copy": true, 00:11:53.783 "nvme_iov_md": false 00:11:53.783 }, 00:11:53.783 "memory_domains": [ 00:11:53.783 { 00:11:53.783 "dma_device_id": "system", 00:11:53.783 "dma_device_type": 1 00:11:53.783 }, 00:11:53.783 { 00:11:53.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.783 "dma_device_type": 2 00:11:53.783 } 00:11:53.783 ], 00:11:53.783 "driver_specific": {} 00:11:53.783 } 00:11:53.783 ] 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.783 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.784 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.784 "name": "Existed_Raid", 00:11:53.784 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:53.784 "strip_size_kb": 0, 00:11:53.784 "state": "online", 00:11:53.784 "raid_level": 
"raid1", 00:11:53.784 "superblock": true, 00:11:53.784 "num_base_bdevs": 4, 00:11:53.784 "num_base_bdevs_discovered": 4, 00:11:53.784 "num_base_bdevs_operational": 4, 00:11:53.784 "base_bdevs_list": [ 00:11:53.784 { 00:11:53.784 "name": "NewBaseBdev", 00:11:53.784 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:53.784 "is_configured": true, 00:11:53.784 "data_offset": 2048, 00:11:53.784 "data_size": 63488 00:11:53.784 }, 00:11:53.784 { 00:11:53.784 "name": "BaseBdev2", 00:11:53.784 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:53.784 "is_configured": true, 00:11:53.784 "data_offset": 2048, 00:11:53.784 "data_size": 63488 00:11:53.784 }, 00:11:53.784 { 00:11:53.784 "name": "BaseBdev3", 00:11:53.784 "uuid": "7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:53.784 "is_configured": true, 00:11:53.784 "data_offset": 2048, 00:11:53.784 "data_size": 63488 00:11:53.784 }, 00:11:53.784 { 00:11:53.784 "name": "BaseBdev4", 00:11:53.784 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:53.784 "is_configured": true, 00:11:53.784 "data_offset": 2048, 00:11:53.784 "data_size": 63488 00:11:53.784 } 00:11:53.784 ] 00:11:53.784 }' 00:11:53.784 14:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.784 14:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.353 [2024-12-09 14:44:32.190918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.353 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.353 "name": "Existed_Raid", 00:11:54.353 "aliases": [ 00:11:54.353 "7e5173b0-7735-4c09-bbb0-1b1affe11b13" 00:11:54.353 ], 00:11:54.353 "product_name": "Raid Volume", 00:11:54.353 "block_size": 512, 00:11:54.353 "num_blocks": 63488, 00:11:54.353 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:54.353 "assigned_rate_limits": { 00:11:54.353 "rw_ios_per_sec": 0, 00:11:54.353 "rw_mbytes_per_sec": 0, 00:11:54.353 "r_mbytes_per_sec": 0, 00:11:54.353 "w_mbytes_per_sec": 0 00:11:54.353 }, 00:11:54.353 "claimed": false, 00:11:54.353 "zoned": false, 00:11:54.353 "supported_io_types": { 00:11:54.353 "read": true, 00:11:54.353 "write": true, 00:11:54.353 "unmap": false, 00:11:54.353 "flush": false, 00:11:54.353 "reset": true, 00:11:54.353 "nvme_admin": false, 00:11:54.353 "nvme_io": false, 00:11:54.353 "nvme_io_md": false, 00:11:54.353 "write_zeroes": true, 00:11:54.353 "zcopy": false, 00:11:54.353 "get_zone_info": false, 00:11:54.353 "zone_management": false, 00:11:54.353 "zone_append": false, 00:11:54.353 "compare": false, 00:11:54.353 "compare_and_write": false, 00:11:54.353 "abort": false, 00:11:54.353 "seek_hole": false, 
00:11:54.353 "seek_data": false, 00:11:54.353 "copy": false, 00:11:54.353 "nvme_iov_md": false 00:11:54.353 }, 00:11:54.353 "memory_domains": [ 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.353 "dma_device_type": 2 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.353 "dma_device_type": 2 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.353 "dma_device_type": 2 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.353 "dma_device_type": 2 00:11:54.353 } 00:11:54.353 ], 00:11:54.353 "driver_specific": { 00:11:54.353 "raid": { 00:11:54.353 "uuid": "7e5173b0-7735-4c09-bbb0-1b1affe11b13", 00:11:54.353 "strip_size_kb": 0, 00:11:54.353 "state": "online", 00:11:54.353 "raid_level": "raid1", 00:11:54.354 "superblock": true, 00:11:54.354 "num_base_bdevs": 4, 00:11:54.354 "num_base_bdevs_discovered": 4, 00:11:54.354 "num_base_bdevs_operational": 4, 00:11:54.354 "base_bdevs_list": [ 00:11:54.354 { 00:11:54.354 "name": "NewBaseBdev", 00:11:54.354 "uuid": "ff824dbf-8c57-417d-82e4-f0432ff2314d", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 2048, 00:11:54.354 "data_size": 63488 00:11:54.354 }, 00:11:54.354 { 00:11:54.354 "name": "BaseBdev2", 00:11:54.354 "uuid": "a6069c2e-481f-4876-b481-7eb43a9dd9a6", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 2048, 00:11:54.354 "data_size": 63488 00:11:54.354 }, 00:11:54.354 { 00:11:54.354 "name": "BaseBdev3", 00:11:54.354 "uuid": 
"7dd82821-d44e-483e-ae8b-529d64a9c27d", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 2048, 00:11:54.354 "data_size": 63488 00:11:54.354 }, 00:11:54.354 { 00:11:54.354 "name": "BaseBdev4", 00:11:54.354 "uuid": "61c96a9b-ff7a-4fbb-bb01-a8449e1ecad1", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 2048, 00:11:54.354 "data_size": 63488 00:11:54.354 } 00:11:54.354 ] 00:11:54.354 } 00:11:54.354 } 00:11:54.354 }' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:54.354 BaseBdev2 00:11:54.354 BaseBdev3 00:11:54.354 BaseBdev4' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.354 
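The `cmp_raid_bdev='512   '` / `cmp_base_bdev='512   '` values and the `[[ 512 == \5\1\2\ \ \  ]]` comparisons in the trace above come from the script's `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` filter: these bdevs report no metadata fields, so the three missing values come back as `null` and `join` renders them as empty strings. A minimal standalone reproduction (hypothetical one-field input, not taken from this run):

```shell
# Hypothetical minimal bdev JSON: only block_size is present; .md_size,
# .md_interleave and .dif_type are absent and evaluate to null in jq.
echo '{"block_size": 512}' |
  jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq (1.6+) stringifies numbers and treats null elements as empty strings
# in join(), so this prints "512" followed by three spaces -- the same
# trailing-space pattern the bdev_raid.sh@193 [[ ... ]] test matches.
```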
14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.354 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.613 [2024-12-09 14:44:32.494001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.613 [2024-12-09 14:44:32.494085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.613 [2024-12-09 14:44:32.494204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.613 [2024-12-09 14:44:32.494560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.613 [2024-12-09 14:44:32.494643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:54.613 14:44:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75166 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75166 ']' 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75166 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75166 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.613 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.614 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75166' 00:11:54.614 killing process with pid 75166 00:11:54.614 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75166 00:11:54.614 [2024-12-09 14:44:32.542253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.614 14:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75166 00:11:54.872 [2024-12-09 14:44:32.969193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.250 14:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:56.250 00:11:56.250 real 0m11.884s 00:11:56.250 user 0m18.818s 00:11:56.250 sys 0m2.065s 00:11:56.251 14:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.251 14:44:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.251 ************************************ 00:11:56.251 END TEST raid_state_function_test_sb 00:11:56.251 ************************************ 00:11:56.251 14:44:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:56.251 14:44:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.251 14:44:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.251 14:44:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.251 ************************************ 00:11:56.251 START TEST raid_superblock_test 00:11:56.251 ************************************ 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75835 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75835 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75835 ']' 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.251 14:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.251 [2024-12-09 14:44:34.327097] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:11:56.251 [2024-12-09 14:44:34.327214] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75835 ] 00:11:56.511 [2024-12-09 14:44:34.503409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.511 [2024-12-09 14:44:34.622466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.770 [2024-12-09 14:44:34.834864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.770 [2024-12-09 14:44:34.834910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:57.339 
14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.339 malloc1 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.339 [2024-12-09 14:44:35.242242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:57.339 [2024-12-09 14:44:35.242359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.339 [2024-12-09 14:44:35.242421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:57.339 [2024-12-09 14:44:35.242459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.339 [2024-12-09 14:44:35.244946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.339 [2024-12-09 14:44:35.245023] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.339 pt1 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.339 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 malloc2 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [2024-12-09 14:44:35.305253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.340 [2024-12-09 14:44:35.305317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.340 [2024-12-09 14:44:35.305347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:57.340 [2024-12-09 14:44:35.305357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.340 [2024-12-09 14:44:35.307776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.340 [2024-12-09 14:44:35.307813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:57.340 
pt2 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 malloc3 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [2024-12-09 14:44:35.382662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:57.340 [2024-12-09 14:44:35.382783] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.340 [2024-12-09 14:44:35.382836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:57.340 [2024-12-09 14:44:35.382871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.340 [2024-12-09 14:44:35.385219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.340 [2024-12-09 14:44:35.385295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:57.340 pt3 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 malloc4 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [2024-12-09 14:44:35.443123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:57.340 [2024-12-09 14:44:35.443241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.340 [2024-12-09 14:44:35.443291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:57.340 [2024-12-09 14:44:35.443334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.340 [2024-12-09 14:44:35.445685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.340 [2024-12-09 14:44:35.445750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:57.340 pt4 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.340 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.340 [2024-12-09 14:44:35.455178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.340 [2024-12-09 14:44:35.457786] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:57.340 [2024-12-09 14:44:35.457968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:57.340 [2024-12-09 14:44:35.458072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:57.340 [2024-12-09 14:44:35.458353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:57.340 [2024-12-09 14:44:35.458381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.340 [2024-12-09 14:44:35.458767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:57.340 [2024-12-09 14:44:35.459033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:57.340 [2024-12-09 14:44:35.459077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:57.340 [2024-12-09 14:44:35.459358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.601 
14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.601 "name": "raid_bdev1", 00:11:57.601 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:57.601 "strip_size_kb": 0, 00:11:57.601 "state": "online", 00:11:57.601 "raid_level": "raid1", 00:11:57.601 "superblock": true, 00:11:57.601 "num_base_bdevs": 4, 00:11:57.601 "num_base_bdevs_discovered": 4, 00:11:57.601 "num_base_bdevs_operational": 4, 00:11:57.601 "base_bdevs_list": [ 00:11:57.601 { 00:11:57.601 "name": "pt1", 00:11:57.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.601 "is_configured": true, 00:11:57.601 "data_offset": 2048, 00:11:57.601 "data_size": 63488 00:11:57.601 }, 00:11:57.601 { 00:11:57.601 "name": "pt2", 00:11:57.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.601 "is_configured": true, 00:11:57.601 "data_offset": 2048, 00:11:57.601 "data_size": 63488 00:11:57.601 }, 00:11:57.601 { 00:11:57.601 "name": "pt3", 00:11:57.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.601 "is_configured": true, 00:11:57.601 "data_offset": 2048, 00:11:57.601 "data_size": 63488 
00:11:57.601 }, 00:11:57.601 { 00:11:57.601 "name": "pt4", 00:11:57.601 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.601 "is_configured": true, 00:11:57.601 "data_offset": 2048, 00:11:57.601 "data_size": 63488 00:11:57.601 } 00:11:57.601 ] 00:11:57.601 }' 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.601 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.861 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.861 [2024-12-09 14:44:35.918878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.862 14:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.862 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.862 "name": "raid_bdev1", 00:11:57.862 "aliases": [ 00:11:57.862 "c3664748-1b64-4f33-9c53-a5c825e1bd23" 00:11:57.862 ], 
00:11:57.862 "product_name": "Raid Volume", 00:11:57.862 "block_size": 512, 00:11:57.862 "num_blocks": 63488, 00:11:57.862 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:57.862 "assigned_rate_limits": { 00:11:57.862 "rw_ios_per_sec": 0, 00:11:57.862 "rw_mbytes_per_sec": 0, 00:11:57.862 "r_mbytes_per_sec": 0, 00:11:57.862 "w_mbytes_per_sec": 0 00:11:57.862 }, 00:11:57.862 "claimed": false, 00:11:57.862 "zoned": false, 00:11:57.862 "supported_io_types": { 00:11:57.862 "read": true, 00:11:57.862 "write": true, 00:11:57.862 "unmap": false, 00:11:57.862 "flush": false, 00:11:57.862 "reset": true, 00:11:57.862 "nvme_admin": false, 00:11:57.862 "nvme_io": false, 00:11:57.862 "nvme_io_md": false, 00:11:57.862 "write_zeroes": true, 00:11:57.862 "zcopy": false, 00:11:57.862 "get_zone_info": false, 00:11:57.862 "zone_management": false, 00:11:57.862 "zone_append": false, 00:11:57.862 "compare": false, 00:11:57.862 "compare_and_write": false, 00:11:57.862 "abort": false, 00:11:57.862 "seek_hole": false, 00:11:57.862 "seek_data": false, 00:11:57.862 "copy": false, 00:11:57.862 "nvme_iov_md": false 00:11:57.862 }, 00:11:57.862 "memory_domains": [ 00:11:57.862 { 00:11:57.862 "dma_device_id": "system", 00:11:57.862 "dma_device_type": 1 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.862 "dma_device_type": 2 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "dma_device_id": "system", 00:11:57.862 "dma_device_type": 1 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.862 "dma_device_type": 2 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "dma_device_id": "system", 00:11:57.862 "dma_device_type": 1 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.862 "dma_device_type": 2 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "dma_device_id": "system", 00:11:57.862 "dma_device_type": 1 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:57.862 "dma_device_type": 2 00:11:57.862 } 00:11:57.862 ], 00:11:57.862 "driver_specific": { 00:11:57.862 "raid": { 00:11:57.862 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:57.862 "strip_size_kb": 0, 00:11:57.862 "state": "online", 00:11:57.862 "raid_level": "raid1", 00:11:57.862 "superblock": true, 00:11:57.862 "num_base_bdevs": 4, 00:11:57.862 "num_base_bdevs_discovered": 4, 00:11:57.862 "num_base_bdevs_operational": 4, 00:11:57.862 "base_bdevs_list": [ 00:11:57.862 { 00:11:57.862 "name": "pt1", 00:11:57.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.862 "is_configured": true, 00:11:57.862 "data_offset": 2048, 00:11:57.862 "data_size": 63488 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "name": "pt2", 00:11:57.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.862 "is_configured": true, 00:11:57.862 "data_offset": 2048, 00:11:57.862 "data_size": 63488 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "name": "pt3", 00:11:57.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.862 "is_configured": true, 00:11:57.862 "data_offset": 2048, 00:11:57.862 "data_size": 63488 00:11:57.862 }, 00:11:57.862 { 00:11:57.862 "name": "pt4", 00:11:57.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.862 "is_configured": true, 00:11:57.862 "data_offset": 2048, 00:11:57.862 "data_size": 63488 00:11:57.862 } 00:11:57.862 ] 00:11:57.862 } 00:11:57.862 } 00:11:57.862 }' 00:11:57.862 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.121 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:58.121 pt2 00:11:58.121 pt3 00:11:58.121 pt4' 00:11:58.121 14:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.121 14:44:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.121 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 [2024-12-09 14:44:36.274204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c3664748-1b64-4f33-9c53-a5c825e1bd23 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c3664748-1b64-4f33-9c53-a5c825e1bd23 ']' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 [2024-12-09 14:44:36.321816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.382 [2024-12-09 14:44:36.321897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.382 [2024-12-09 14:44:36.321993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.382 [2024-12-09 14:44:36.322088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.382 [2024-12-09 14:44:36.322104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.382 14:44:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 [2024-12-09 14:44:36.481606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:58.382 [2024-12-09 14:44:36.483776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:58.382 [2024-12-09 14:44:36.483880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:58.382 [2024-12-09 14:44:36.483947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:58.382 [2024-12-09 14:44:36.484035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:58.382 [2024-12-09 14:44:36.484093] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:58.382 [2024-12-09 14:44:36.484115] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:58.382 [2024-12-09 14:44:36.484136] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:58.382 [2024-12-09 14:44:36.484152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.382 [2024-12-09 14:44:36.484164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:58.382 request: 00:11:58.382 { 00:11:58.382 "name": "raid_bdev1", 00:11:58.382 "raid_level": "raid1", 00:11:58.382 "base_bdevs": [ 00:11:58.382 "malloc1", 00:11:58.382 "malloc2", 00:11:58.382 "malloc3", 00:11:58.382 "malloc4" 00:11:58.382 ], 00:11:58.382 "superblock": false, 00:11:58.382 "method": "bdev_raid_create", 00:11:58.382 "req_id": 1 00:11:58.382 } 00:11:58.382 Got JSON-RPC error response 00:11:58.382 response: 00:11:58.382 { 00:11:58.382 "code": -17, 00:11:58.382 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:58.382 } 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:58.382 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:58.643 
14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.643 [2024-12-09 14:44:36.541440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:58.643 [2024-12-09 14:44:36.541497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.643 [2024-12-09 14:44:36.541516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:58.643 [2024-12-09 14:44:36.541527] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.643 [2024-12-09 14:44:36.543749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.643 [2024-12-09 14:44:36.543793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:58.643 [2024-12-09 14:44:36.543883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:58.643 [2024-12-09 14:44:36.543951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:58.643 pt1 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.643 14:44:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.643 "name": "raid_bdev1", 00:11:58.643 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:58.643 "strip_size_kb": 0, 00:11:58.643 "state": "configuring", 00:11:58.643 "raid_level": "raid1", 00:11:58.643 "superblock": true, 00:11:58.643 "num_base_bdevs": 4, 00:11:58.643 "num_base_bdevs_discovered": 1, 00:11:58.643 "num_base_bdevs_operational": 4, 00:11:58.643 "base_bdevs_list": [ 00:11:58.643 { 00:11:58.643 "name": "pt1", 00:11:58.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.643 "is_configured": true, 00:11:58.643 "data_offset": 2048, 00:11:58.643 "data_size": 63488 00:11:58.643 }, 00:11:58.643 { 00:11:58.643 "name": null, 00:11:58.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.643 "is_configured": false, 00:11:58.643 "data_offset": 2048, 00:11:58.643 "data_size": 63488 00:11:58.643 }, 00:11:58.643 { 00:11:58.643 "name": null, 00:11:58.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.643 
"is_configured": false, 00:11:58.643 "data_offset": 2048, 00:11:58.643 "data_size": 63488 00:11:58.643 }, 00:11:58.643 { 00:11:58.643 "name": null, 00:11:58.643 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:58.643 "is_configured": false, 00:11:58.643 "data_offset": 2048, 00:11:58.643 "data_size": 63488 00:11:58.643 } 00:11:58.643 ] 00:11:58.643 }' 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.643 14:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.903 14:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:58.903 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.903 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.903 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.903 [2024-12-09 14:44:37.008705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.903 [2024-12-09 14:44:37.008848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.903 [2024-12-09 14:44:37.008914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:58.903 [2024-12-09 14:44:37.008951] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.903 [2024-12-09 14:44:37.009476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.903 [2024-12-09 14:44:37.009543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.903 [2024-12-09 14:44:37.009706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.903 [2024-12-09 14:44:37.009771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:58.903 pt2 00:11:58.903 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.903 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.903 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.903 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.903 [2024-12-09 14:44:37.020688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.163 "name": "raid_bdev1", 00:11:59.163 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:59.163 "strip_size_kb": 0, 00:11:59.163 "state": "configuring", 00:11:59.163 "raid_level": "raid1", 00:11:59.163 "superblock": true, 00:11:59.163 "num_base_bdevs": 4, 00:11:59.163 "num_base_bdevs_discovered": 1, 00:11:59.163 "num_base_bdevs_operational": 4, 00:11:59.163 "base_bdevs_list": [ 00:11:59.163 { 00:11:59.163 "name": "pt1", 00:11:59.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.163 "is_configured": true, 00:11:59.163 "data_offset": 2048, 00:11:59.163 "data_size": 63488 00:11:59.163 }, 00:11:59.163 { 00:11:59.163 "name": null, 00:11:59.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.163 "is_configured": false, 00:11:59.163 "data_offset": 0, 00:11:59.163 "data_size": 63488 00:11:59.163 }, 00:11:59.163 { 00:11:59.163 "name": null, 00:11:59.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.163 "is_configured": false, 00:11:59.163 "data_offset": 2048, 00:11:59.163 "data_size": 63488 00:11:59.163 }, 00:11:59.163 { 00:11:59.163 "name": null, 00:11:59.163 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.163 "is_configured": false, 00:11:59.163 "data_offset": 2048, 00:11:59.163 "data_size": 63488 00:11:59.163 } 00:11:59.163 ] 00:11:59.163 }' 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.163 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.423 [2024-12-09 14:44:37.499881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:59.423 [2024-12-09 14:44:37.499957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.423 [2024-12-09 14:44:37.499981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:59.423 [2024-12-09 14:44:37.499991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.423 [2024-12-09 14:44:37.500526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.423 [2024-12-09 14:44:37.500544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:59.423 [2024-12-09 14:44:37.500661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:59.423 [2024-12-09 14:44:37.500687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:59.423 pt2 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:59.423 14:44:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.423 [2024-12-09 14:44:37.511825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:59.423 [2024-12-09 14:44:37.511880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.423 [2024-12-09 14:44:37.511900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:59.423 [2024-12-09 14:44:37.511910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.423 [2024-12-09 14:44:37.512344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.423 [2024-12-09 14:44:37.512361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:59.423 [2024-12-09 14:44:37.512433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:59.423 [2024-12-09 14:44:37.512454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:59.423 pt3 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.423 [2024-12-09 14:44:37.523764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:59.423 [2024-12-09 
14:44:37.523808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.423 [2024-12-09 14:44:37.523824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:59.423 [2024-12-09 14:44:37.523832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.423 [2024-12-09 14:44:37.524219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.423 [2024-12-09 14:44:37.524240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:59.423 [2024-12-09 14:44:37.524305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:59.423 [2024-12-09 14:44:37.524330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:59.423 [2024-12-09 14:44:37.524490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:59.423 [2024-12-09 14:44:37.524511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.423 [2024-12-09 14:44:37.524786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:59.423 [2024-12-09 14:44:37.524965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:59.423 [2024-12-09 14:44:37.524978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:59.423 [2024-12-09 14:44:37.525114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.423 pt4 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.423 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.683 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.683 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.683 "name": "raid_bdev1", 00:11:59.683 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:59.683 "strip_size_kb": 0, 00:11:59.683 "state": "online", 00:11:59.683 "raid_level": "raid1", 00:11:59.683 "superblock": true, 00:11:59.683 "num_base_bdevs": 4, 00:11:59.683 
"num_base_bdevs_discovered": 4, 00:11:59.683 "num_base_bdevs_operational": 4, 00:11:59.683 "base_bdevs_list": [ 00:11:59.683 { 00:11:59.683 "name": "pt1", 00:11:59.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.683 "is_configured": true, 00:11:59.683 "data_offset": 2048, 00:11:59.683 "data_size": 63488 00:11:59.683 }, 00:11:59.683 { 00:11:59.683 "name": "pt2", 00:11:59.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.683 "is_configured": true, 00:11:59.683 "data_offset": 2048, 00:11:59.683 "data_size": 63488 00:11:59.683 }, 00:11:59.683 { 00:11:59.683 "name": "pt3", 00:11:59.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.683 "is_configured": true, 00:11:59.683 "data_offset": 2048, 00:11:59.683 "data_size": 63488 00:11:59.683 }, 00:11:59.683 { 00:11:59.683 "name": "pt4", 00:11:59.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.683 "is_configured": true, 00:11:59.683 "data_offset": 2048, 00:11:59.683 "data_size": 63488 00:11:59.683 } 00:11:59.683 ] 00:11:59.683 }' 00:11:59.683 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.684 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.943 14:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.943 [2024-12-09 14:44:37.983492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.943 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.943 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.943 "name": "raid_bdev1", 00:11:59.943 "aliases": [ 00:11:59.943 "c3664748-1b64-4f33-9c53-a5c825e1bd23" 00:11:59.943 ], 00:11:59.943 "product_name": "Raid Volume", 00:11:59.943 "block_size": 512, 00:11:59.943 "num_blocks": 63488, 00:11:59.943 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:59.943 "assigned_rate_limits": { 00:11:59.943 "rw_ios_per_sec": 0, 00:11:59.943 "rw_mbytes_per_sec": 0, 00:11:59.943 "r_mbytes_per_sec": 0, 00:11:59.943 "w_mbytes_per_sec": 0 00:11:59.943 }, 00:11:59.943 "claimed": false, 00:11:59.943 "zoned": false, 00:11:59.943 "supported_io_types": { 00:11:59.943 "read": true, 00:11:59.943 "write": true, 00:11:59.943 "unmap": false, 00:11:59.943 "flush": false, 00:11:59.943 "reset": true, 00:11:59.943 "nvme_admin": false, 00:11:59.943 "nvme_io": false, 00:11:59.943 "nvme_io_md": false, 00:11:59.943 "write_zeroes": true, 00:11:59.943 "zcopy": false, 00:11:59.943 "get_zone_info": false, 00:11:59.943 "zone_management": false, 00:11:59.943 "zone_append": false, 00:11:59.943 "compare": false, 00:11:59.943 "compare_and_write": false, 00:11:59.943 "abort": false, 00:11:59.943 "seek_hole": false, 00:11:59.943 "seek_data": false, 00:11:59.943 "copy": false, 00:11:59.943 "nvme_iov_md": false 00:11:59.943 }, 00:11:59.943 "memory_domains": [ 00:11:59.943 { 00:11:59.943 "dma_device_id": "system", 00:11:59.943 
"dma_device_type": 1 00:11:59.943 }, 00:11:59.943 { 00:11:59.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.943 "dma_device_type": 2 00:11:59.943 }, 00:11:59.943 { 00:11:59.943 "dma_device_id": "system", 00:11:59.944 "dma_device_type": 1 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.944 "dma_device_type": 2 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "dma_device_id": "system", 00:11:59.944 "dma_device_type": 1 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.944 "dma_device_type": 2 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "dma_device_id": "system", 00:11:59.944 "dma_device_type": 1 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.944 "dma_device_type": 2 00:11:59.944 } 00:11:59.944 ], 00:11:59.944 "driver_specific": { 00:11:59.944 "raid": { 00:11:59.944 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:11:59.944 "strip_size_kb": 0, 00:11:59.944 "state": "online", 00:11:59.944 "raid_level": "raid1", 00:11:59.944 "superblock": true, 00:11:59.944 "num_base_bdevs": 4, 00:11:59.944 "num_base_bdevs_discovered": 4, 00:11:59.944 "num_base_bdevs_operational": 4, 00:11:59.944 "base_bdevs_list": [ 00:11:59.944 { 00:11:59.944 "name": "pt1", 00:11:59.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.944 "is_configured": true, 00:11:59.944 "data_offset": 2048, 00:11:59.944 "data_size": 63488 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "name": "pt2", 00:11:59.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.944 "is_configured": true, 00:11:59.944 "data_offset": 2048, 00:11:59.944 "data_size": 63488 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "name": "pt3", 00:11:59.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.944 "is_configured": true, 00:11:59.944 "data_offset": 2048, 00:11:59.944 "data_size": 63488 00:11:59.944 }, 00:11:59.944 { 00:11:59.944 "name": "pt4", 00:11:59.944 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:59.944 "is_configured": true, 00:11:59.944 "data_offset": 2048, 00:11:59.944 "data_size": 63488 00:11:59.944 } 00:11:59.944 ] 00:11:59.944 } 00:11:59.944 } 00:11:59.944 }' 00:11:59.944 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:00.204 pt2 00:12:00.204 pt3 00:12:00.204 pt4' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:00.204 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 [2024-12-09 14:44:38.302941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c3664748-1b64-4f33-9c53-a5c825e1bd23 '!=' c3664748-1b64-4f33-9c53-a5c825e1bd23 ']' 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.463 [2024-12-09 14:44:38.346545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:00.463 14:44:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.463 "name": "raid_bdev1", 00:12:00.463 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:12:00.463 "strip_size_kb": 0, 00:12:00.463 "state": "online", 
00:12:00.463 "raid_level": "raid1", 00:12:00.463 "superblock": true, 00:12:00.463 "num_base_bdevs": 4, 00:12:00.463 "num_base_bdevs_discovered": 3, 00:12:00.463 "num_base_bdevs_operational": 3, 00:12:00.463 "base_bdevs_list": [ 00:12:00.463 { 00:12:00.463 "name": null, 00:12:00.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.463 "is_configured": false, 00:12:00.463 "data_offset": 0, 00:12:00.463 "data_size": 63488 00:12:00.463 }, 00:12:00.463 { 00:12:00.463 "name": "pt2", 00:12:00.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.463 "is_configured": true, 00:12:00.463 "data_offset": 2048, 00:12:00.463 "data_size": 63488 00:12:00.463 }, 00:12:00.463 { 00:12:00.463 "name": "pt3", 00:12:00.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.463 "is_configured": true, 00:12:00.463 "data_offset": 2048, 00:12:00.463 "data_size": 63488 00:12:00.463 }, 00:12:00.463 { 00:12:00.463 "name": "pt4", 00:12:00.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.463 "is_configured": true, 00:12:00.463 "data_offset": 2048, 00:12:00.463 "data_size": 63488 00:12:00.463 } 00:12:00.463 ] 00:12:00.463 }' 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.463 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.723 [2024-12-09 14:44:38.801708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.723 [2024-12-09 14:44:38.801800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.723 [2024-12-09 14:44:38.801936] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:00.723 [2024-12-09 14:44:38.802044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.723 [2024-12-09 14:44:38.802055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:00.723 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:00.983 
14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 [2024-12-09 14:44:38.901523] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:00.983 [2024-12-09 14:44:38.901594] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.983 [2024-12-09 14:44:38.901632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:00.983 [2024-12-09 14:44:38.901643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.983 [2024-12-09 14:44:38.904074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.983 [2024-12-09 14:44:38.904114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:00.983 [2024-12-09 14:44:38.904209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:00.983 [2024-12-09 14:44:38.904277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:00.983 pt2 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.983 "name": "raid_bdev1", 00:12:00.983 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:12:00.983 "strip_size_kb": 0, 00:12:00.983 "state": "configuring", 00:12:00.983 "raid_level": "raid1", 00:12:00.983 "superblock": true, 00:12:00.983 "num_base_bdevs": 4, 00:12:00.983 "num_base_bdevs_discovered": 1, 00:12:00.983 "num_base_bdevs_operational": 3, 00:12:00.983 "base_bdevs_list": [ 00:12:00.983 { 00:12:00.983 "name": null, 00:12:00.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.983 "is_configured": false, 00:12:00.983 "data_offset": 2048, 00:12:00.983 "data_size": 63488 00:12:00.983 }, 00:12:00.983 { 00:12:00.983 "name": "pt2", 00:12:00.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.983 "is_configured": true, 00:12:00.983 "data_offset": 2048, 00:12:00.983 "data_size": 63488 00:12:00.983 }, 00:12:00.983 { 00:12:00.983 "name": null, 00:12:00.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.983 "is_configured": false, 00:12:00.983 "data_offset": 2048, 00:12:00.983 "data_size": 63488 00:12:00.983 }, 00:12:00.983 { 00:12:00.983 "name": null, 00:12:00.983 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.983 "is_configured": false, 00:12:00.983 "data_offset": 2048, 00:12:00.983 "data_size": 63488 00:12:00.983 } 00:12:00.983 ] 00:12:00.983 }' 
00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.983 14:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 [2024-12-09 14:44:39.352826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:01.243 [2024-12-09 14:44:39.352959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.243 [2024-12-09 14:44:39.353004] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:01.243 [2024-12-09 14:44:39.353037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.243 [2024-12-09 14:44:39.353566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.243 [2024-12-09 14:44:39.353650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:01.243 [2024-12-09 14:44:39.353786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:01.243 [2024-12-09 14:44:39.353844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:01.243 pt3 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.243 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.502 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.502 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.502 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.502 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.502 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.502 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.502 "name": "raid_bdev1", 00:12:01.503 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:12:01.503 "strip_size_kb": 0, 00:12:01.503 "state": "configuring", 00:12:01.503 "raid_level": "raid1", 00:12:01.503 "superblock": true, 00:12:01.503 "num_base_bdevs": 4, 00:12:01.503 "num_base_bdevs_discovered": 2, 00:12:01.503 "num_base_bdevs_operational": 3, 00:12:01.503 
"base_bdevs_list": [ 00:12:01.503 { 00:12:01.503 "name": null, 00:12:01.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.503 "is_configured": false, 00:12:01.503 "data_offset": 2048, 00:12:01.503 "data_size": 63488 00:12:01.503 }, 00:12:01.503 { 00:12:01.503 "name": "pt2", 00:12:01.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.503 "is_configured": true, 00:12:01.503 "data_offset": 2048, 00:12:01.503 "data_size": 63488 00:12:01.503 }, 00:12:01.503 { 00:12:01.503 "name": "pt3", 00:12:01.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.503 "is_configured": true, 00:12:01.503 "data_offset": 2048, 00:12:01.503 "data_size": 63488 00:12:01.503 }, 00:12:01.503 { 00:12:01.503 "name": null, 00:12:01.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:01.503 "is_configured": false, 00:12:01.503 "data_offset": 2048, 00:12:01.503 "data_size": 63488 00:12:01.503 } 00:12:01.503 ] 00:12:01.503 }' 00:12:01.503 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.503 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.762 [2024-12-09 14:44:39.836058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:01.762 [2024-12-09 14:44:39.836193] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.762 [2024-12-09 14:44:39.836230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:01.762 [2024-12-09 14:44:39.836241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.762 [2024-12-09 14:44:39.836744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.762 [2024-12-09 14:44:39.836773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:01.762 [2024-12-09 14:44:39.836869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:01.762 [2024-12-09 14:44:39.836894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:01.762 [2024-12-09 14:44:39.837053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:01.762 [2024-12-09 14:44:39.837063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.762 [2024-12-09 14:44:39.837327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:01.762 [2024-12-09 14:44:39.837499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:01.762 [2024-12-09 14:44:39.837513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:01.762 [2024-12-09 14:44:39.837681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.762 pt4 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.762 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.020 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.020 "name": "raid_bdev1", 00:12:02.020 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:12:02.020 "strip_size_kb": 0, 00:12:02.020 "state": "online", 00:12:02.020 "raid_level": "raid1", 00:12:02.020 "superblock": true, 00:12:02.020 "num_base_bdevs": 4, 00:12:02.020 "num_base_bdevs_discovered": 3, 00:12:02.021 "num_base_bdevs_operational": 3, 00:12:02.021 "base_bdevs_list": [ 00:12:02.021 { 00:12:02.021 "name": null, 00:12:02.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.021 "is_configured": false, 00:12:02.021 
"data_offset": 2048, 00:12:02.021 "data_size": 63488 00:12:02.021 }, 00:12:02.021 { 00:12:02.021 "name": "pt2", 00:12:02.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.021 "is_configured": true, 00:12:02.021 "data_offset": 2048, 00:12:02.021 "data_size": 63488 00:12:02.021 }, 00:12:02.021 { 00:12:02.021 "name": "pt3", 00:12:02.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.021 "is_configured": true, 00:12:02.021 "data_offset": 2048, 00:12:02.021 "data_size": 63488 00:12:02.021 }, 00:12:02.021 { 00:12:02.021 "name": "pt4", 00:12:02.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:02.021 "is_configured": true, 00:12:02.021 "data_offset": 2048, 00:12:02.021 "data_size": 63488 00:12:02.021 } 00:12:02.021 ] 00:12:02.021 }' 00:12:02.021 14:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.021 14:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.280 [2024-12-09 14:44:40.355163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.280 [2024-12-09 14:44:40.355260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.280 [2024-12-09 14:44:40.355376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.280 [2024-12-09 14:44:40.355489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.280 [2024-12-09 14:44:40.355543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:02.280 14:44:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:02.280 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.539 [2024-12-09 14:44:40.431037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:02.539 [2024-12-09 14:44:40.431224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:02.539 [2024-12-09 14:44:40.431270] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:02.539 [2024-12-09 14:44:40.431312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.539 [2024-12-09 14:44:40.433871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.539 [2024-12-09 14:44:40.433958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:02.539 [2024-12-09 14:44:40.434083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:02.539 [2024-12-09 14:44:40.434180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:02.539 [2024-12-09 14:44:40.434375] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:02.539 [2024-12-09 14:44:40.434441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.539 [2024-12-09 14:44:40.434479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:02.539 [2024-12-09 14:44:40.434622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:02.539 [2024-12-09 14:44:40.434799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:02.539 pt1 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.539 "name": "raid_bdev1", 00:12:02.539 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:12:02.539 "strip_size_kb": 0, 00:12:02.539 "state": "configuring", 00:12:02.539 "raid_level": "raid1", 00:12:02.539 "superblock": true, 00:12:02.539 "num_base_bdevs": 4, 00:12:02.539 "num_base_bdevs_discovered": 2, 00:12:02.539 "num_base_bdevs_operational": 3, 00:12:02.539 "base_bdevs_list": [ 00:12:02.539 { 00:12:02.539 "name": null, 00:12:02.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.539 "is_configured": false, 00:12:02.539 "data_offset": 2048, 00:12:02.539 
"data_size": 63488 00:12:02.539 }, 00:12:02.539 { 00:12:02.539 "name": "pt2", 00:12:02.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.539 "is_configured": true, 00:12:02.539 "data_offset": 2048, 00:12:02.539 "data_size": 63488 00:12:02.539 }, 00:12:02.539 { 00:12:02.539 "name": "pt3", 00:12:02.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.539 "is_configured": true, 00:12:02.539 "data_offset": 2048, 00:12:02.539 "data_size": 63488 00:12:02.539 }, 00:12:02.539 { 00:12:02.539 "name": null, 00:12:02.539 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:02.539 "is_configured": false, 00:12:02.539 "data_offset": 2048, 00:12:02.539 "data_size": 63488 00:12:02.539 } 00:12:02.539 ] 00:12:02.539 }' 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.539 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.798 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:02.798 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:02.798 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.798 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.798 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.060 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:03.060 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:03.060 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.060 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.060 [2024-12-09 
14:44:40.930239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:03.060 [2024-12-09 14:44:40.930324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.060 [2024-12-09 14:44:40.930357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:03.060 [2024-12-09 14:44:40.930370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.060 [2024-12-09 14:44:40.930941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.060 [2024-12-09 14:44:40.930981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:03.060 [2024-12-09 14:44:40.931109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:03.060 [2024-12-09 14:44:40.931144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:03.060 [2024-12-09 14:44:40.931313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:03.060 [2024-12-09 14:44:40.931335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.060 [2024-12-09 14:44:40.931669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:03.060 [2024-12-09 14:44:40.931868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:03.060 [2024-12-09 14:44:40.931889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:03.060 [2024-12-09 14:44:40.932091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.060 pt4 00:12:03.060 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:03.061 14:44:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.061 "name": "raid_bdev1", 00:12:03.061 "uuid": "c3664748-1b64-4f33-9c53-a5c825e1bd23", 00:12:03.061 "strip_size_kb": 0, 00:12:03.061 "state": "online", 00:12:03.061 "raid_level": "raid1", 00:12:03.061 "superblock": true, 00:12:03.061 "num_base_bdevs": 4, 00:12:03.061 "num_base_bdevs_discovered": 3, 00:12:03.061 "num_base_bdevs_operational": 3, 00:12:03.061 "base_bdevs_list": [ 00:12:03.061 { 
00:12:03.061 "name": null, 00:12:03.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.061 "is_configured": false, 00:12:03.061 "data_offset": 2048, 00:12:03.061 "data_size": 63488 00:12:03.061 }, 00:12:03.061 { 00:12:03.061 "name": "pt2", 00:12:03.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.061 "is_configured": true, 00:12:03.061 "data_offset": 2048, 00:12:03.061 "data_size": 63488 00:12:03.061 }, 00:12:03.061 { 00:12:03.061 "name": "pt3", 00:12:03.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.061 "is_configured": true, 00:12:03.061 "data_offset": 2048, 00:12:03.061 "data_size": 63488 00:12:03.061 }, 00:12:03.061 { 00:12:03.061 "name": "pt4", 00:12:03.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.061 "is_configured": true, 00:12:03.061 "data_offset": 2048, 00:12:03.061 "data_size": 63488 00:12:03.061 } 00:12:03.061 ] 00:12:03.061 }' 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.061 14:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.320 14:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:03.320 14:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:03.320 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.320 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.580 
14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.580 [2024-12-09 14:44:41.485619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c3664748-1b64-4f33-9c53-a5c825e1bd23 '!=' c3664748-1b64-4f33-9c53-a5c825e1bd23 ']' 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75835 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75835 ']' 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75835 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75835 00:12:03.580 killing process with pid 75835 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75835' 00:12:03.580 14:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75835 00:12:03.580 [2024-12-09 14:44:41.549795] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.580 [2024-12-09 14:44:41.549903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.580 14:44:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75835 00:12:03.580 [2024-12-09 14:44:41.549989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.580 [2024-12-09 14:44:41.550003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:04.148 [2024-12-09 14:44:41.971349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:05.087 14:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:05.087 00:12:05.087 real 0m8.909s 00:12:05.087 user 0m14.050s 00:12:05.087 sys 0m1.619s 00:12:05.087 ************************************ 00:12:05.087 END TEST raid_superblock_test 00:12:05.087 ************************************ 00:12:05.087 14:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.087 14:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.087 14:44:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:05.087 14:44:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:05.087 14:44:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.087 14:44:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:05.087 ************************************ 00:12:05.087 START TEST raid_read_error_test 00:12:05.087 ************************************ 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:05.087 
14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:05.087 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:05.346 14:44:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1mpQfXZ0gh 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76329 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76329 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76329 ']' 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.346 14:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.347 14:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.347 14:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.347 14:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.347 [2024-12-09 14:44:43.307891] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:12:05.347 [2024-12-09 14:44:43.308091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76329 ] 00:12:05.347 [2024-12-09 14:44:43.465722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.606 [2024-12-09 14:44:43.587264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.866 [2024-12-09 14:44:43.796229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.866 [2024-12-09 14:44:43.796416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.126 BaseBdev1_malloc 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.126 true 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.126 [2024-12-09 14:44:44.220231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:06.126 [2024-12-09 14:44:44.220355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.126 [2024-12-09 14:44:44.220383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:06.126 [2024-12-09 14:44:44.220395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.126 [2024-12-09 14:44:44.222530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.126 [2024-12-09 14:44:44.222587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:06.126 BaseBdev1 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.126 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 BaseBdev2_malloc 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 true 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 [2024-12-09 14:44:44.289970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:06.386 [2024-12-09 14:44:44.290031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.386 [2024-12-09 14:44:44.290050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:06.386 [2024-12-09 14:44:44.290060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.386 [2024-12-09 14:44:44.292313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.386 [2024-12-09 14:44:44.292405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:06.386 BaseBdev2 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 BaseBdev3_malloc 00:12:06.386 14:44:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 true 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 [2024-12-09 14:44:44.369467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:06.386 [2024-12-09 14:44:44.369529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.386 [2024-12-09 14:44:44.369550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:06.386 [2024-12-09 14:44:44.369562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.386 [2024-12-09 14:44:44.371927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.386 [2024-12-09 14:44:44.372020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:06.386 BaseBdev3 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 BaseBdev4_malloc 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 true 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 [2024-12-09 14:44:44.439998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:06.386 [2024-12-09 14:44:44.440061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.386 [2024-12-09 14:44:44.440086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:06.386 [2024-12-09 14:44:44.440103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.386 [2024-12-09 14:44:44.442597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.386 [2024-12-09 14:44:44.442641] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:06.386 BaseBdev4 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 [2024-12-09 14:44:44.452026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.386 [2024-12-09 14:44:44.454101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.386 [2024-12-09 14:44:44.454185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.387 [2024-12-09 14:44:44.454255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.387 [2024-12-09 14:44:44.454527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:06.387 [2024-12-09 14:44:44.454543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:06.387 [2024-12-09 14:44:44.454828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:06.387 [2024-12-09 14:44:44.455031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:06.387 [2024-12-09 14:44:44.455041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:06.387 [2024-12-09 14:44:44.455241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:06.387 14:44:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.387 "name": "raid_bdev1", 00:12:06.387 "uuid": "b9edec44-662f-4763-9590-fbf8293b1ac7", 00:12:06.387 "strip_size_kb": 0, 00:12:06.387 "state": "online", 00:12:06.387 "raid_level": "raid1", 00:12:06.387 "superblock": true, 00:12:06.387 "num_base_bdevs": 4, 00:12:06.387 "num_base_bdevs_discovered": 4, 00:12:06.387 "num_base_bdevs_operational": 4, 00:12:06.387 "base_bdevs_list": [ 00:12:06.387 { 
00:12:06.387 "name": "BaseBdev1", 00:12:06.387 "uuid": "37f52210-a6d0-5d6f-8aa2-002c134fb077", 00:12:06.387 "is_configured": true, 00:12:06.387 "data_offset": 2048, 00:12:06.387 "data_size": 63488 00:12:06.387 }, 00:12:06.387 { 00:12:06.387 "name": "BaseBdev2", 00:12:06.387 "uuid": "c434da06-943d-5ae2-a1ea-629d659ea96b", 00:12:06.387 "is_configured": true, 00:12:06.387 "data_offset": 2048, 00:12:06.387 "data_size": 63488 00:12:06.387 }, 00:12:06.387 { 00:12:06.387 "name": "BaseBdev3", 00:12:06.387 "uuid": "9971c971-1da9-5470-b370-1861236c91d5", 00:12:06.387 "is_configured": true, 00:12:06.387 "data_offset": 2048, 00:12:06.387 "data_size": 63488 00:12:06.387 }, 00:12:06.387 { 00:12:06.387 "name": "BaseBdev4", 00:12:06.387 "uuid": "ebb4b1f1-ea42-568a-8bbd-4222b04bb5f5", 00:12:06.387 "is_configured": true, 00:12:06.387 "data_offset": 2048, 00:12:06.387 "data_size": 63488 00:12:06.387 } 00:12:06.387 ] 00:12:06.387 }' 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.387 14:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.955 14:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.955 [2024-12-09 14:44:45.000675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.891 14:44:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.891 14:44:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.891 "name": "raid_bdev1", 00:12:07.891 "uuid": "b9edec44-662f-4763-9590-fbf8293b1ac7", 00:12:07.891 "strip_size_kb": 0, 00:12:07.891 "state": "online", 00:12:07.891 "raid_level": "raid1", 00:12:07.891 "superblock": true, 00:12:07.891 "num_base_bdevs": 4, 00:12:07.891 "num_base_bdevs_discovered": 4, 00:12:07.891 "num_base_bdevs_operational": 4, 00:12:07.891 "base_bdevs_list": [ 00:12:07.891 { 00:12:07.891 "name": "BaseBdev1", 00:12:07.891 "uuid": "37f52210-a6d0-5d6f-8aa2-002c134fb077", 00:12:07.891 "is_configured": true, 00:12:07.891 "data_offset": 2048, 00:12:07.891 "data_size": 63488 00:12:07.891 }, 00:12:07.891 { 00:12:07.891 "name": "BaseBdev2", 00:12:07.891 "uuid": "c434da06-943d-5ae2-a1ea-629d659ea96b", 00:12:07.891 "is_configured": true, 00:12:07.891 "data_offset": 2048, 00:12:07.891 "data_size": 63488 00:12:07.891 }, 00:12:07.891 { 00:12:07.891 "name": "BaseBdev3", 00:12:07.891 "uuid": "9971c971-1da9-5470-b370-1861236c91d5", 00:12:07.891 "is_configured": true, 00:12:07.891 "data_offset": 2048, 00:12:07.891 "data_size": 63488 00:12:07.891 }, 00:12:07.891 { 00:12:07.891 "name": "BaseBdev4", 00:12:07.891 "uuid": "ebb4b1f1-ea42-568a-8bbd-4222b04bb5f5", 00:12:07.891 "is_configured": true, 00:12:07.891 "data_offset": 2048, 00:12:07.891 "data_size": 63488 00:12:07.891 } 00:12:07.891 ] 00:12:07.891 }' 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.891 14:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.184 14:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.184 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.184 14:44:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.443 [2024-12-09 14:44:46.299815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.443 [2024-12-09 14:44:46.299853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.443 [2024-12-09 14:44:46.302967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.443 [2024-12-09 14:44:46.303036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.443 [2024-12-09 14:44:46.303179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.443 [2024-12-09 14:44:46.303194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:08.443 { 00:12:08.443 "results": [ 00:12:08.443 { 00:12:08.443 "job": "raid_bdev1", 00:12:08.443 "core_mask": "0x1", 00:12:08.443 "workload": "randrw", 00:12:08.443 "percentage": 50, 00:12:08.443 "status": "finished", 00:12:08.443 "queue_depth": 1, 00:12:08.443 "io_size": 131072, 00:12:08.443 "runtime": 1.299593, 00:12:08.443 "iops": 9817.689076503182, 00:12:08.443 "mibps": 1227.2111345628978, 00:12:08.443 "io_failed": 0, 00:12:08.443 "io_timeout": 0, 00:12:08.443 "avg_latency_us": 98.92782140939302, 00:12:08.443 "min_latency_us": 25.041048034934498, 00:12:08.443 "max_latency_us": 1917.4288209606987 00:12:08.443 } 00:12:08.443 ], 00:12:08.443 "core_count": 1 00:12:08.443 } 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76329 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76329 ']' 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76329 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76329 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.443 killing process with pid 76329 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76329' 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76329 00:12:08.443 [2024-12-09 14:44:46.337554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.443 14:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76329 00:12:08.702 [2024-12-09 14:44:46.680616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1mpQfXZ0gh 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:10.082 ************************************ 00:12:10.082 END TEST raid_read_error_test 
00:12:10.082 ************************************ 00:12:10.082 00:12:10.082 real 0m4.702s 00:12:10.082 user 0m5.504s 00:12:10.082 sys 0m0.556s 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.082 14:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.082 14:44:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:10.082 14:44:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:10.082 14:44:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.082 14:44:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.082 ************************************ 00:12:10.082 START TEST raid_write_error_test 00:12:10.082 ************************************ 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.082 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bC1yf6lyuX 00:12:10.083 14:44:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76475 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76475 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76475 ']' 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.083 14:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:10.083 [2024-12-09 14:44:48.062351] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:12:10.083 [2024-12-09 14:44:48.062477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76475 ] 00:12:10.342 [2024-12-09 14:44:48.217350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.342 [2024-12-09 14:44:48.333530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.605 [2024-12-09 14:44:48.535662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.605 [2024-12-09 14:44:48.535844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.865 BaseBdev1_malloc 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.865 true 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.865 [2024-12-09 14:44:48.963808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:10.865 [2024-12-09 14:44:48.963912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.865 [2024-12-09 14:44:48.963956] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:10.865 [2024-12-09 14:44:48.963969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.865 [2024-12-09 14:44:48.966235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.865 [2024-12-09 14:44:48.966280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.865 BaseBdev1 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.865 14:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 BaseBdev2_malloc 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:11.125 14:44:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 true 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 [2024-12-09 14:44:49.030834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:11.125 [2024-12-09 14:44:49.030897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.125 [2024-12-09 14:44:49.030917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:11.125 [2024-12-09 14:44:49.030928] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.125 [2024-12-09 14:44:49.033125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.125 [2024-12-09 14:44:49.033180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:11.125 BaseBdev2 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:11.125 BaseBdev3_malloc 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 true 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 [2024-12-09 14:44:49.108319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:11.125 [2024-12-09 14:44:49.108420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.125 [2024-12-09 14:44:49.108444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:11.125 [2024-12-09 14:44:49.108455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.125 [2024-12-09 14:44:49.110615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.125 [2024-12-09 14:44:49.110655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:11.125 BaseBdev3 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 BaseBdev4_malloc 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 true 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 [2024-12-09 14:44:49.176422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:11.125 [2024-12-09 14:44:49.176476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.125 [2024-12-09 14:44:49.176493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:11.125 [2024-12-09 14:44:49.176503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.125 [2024-12-09 14:44:49.178598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.125 [2024-12-09 14:44:49.178637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:11.125 BaseBdev4 
00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 [2024-12-09 14:44:49.188455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.125 [2024-12-09 14:44:49.190279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.125 [2024-12-09 14:44:49.190445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.125 [2024-12-09 14:44:49.190526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.125 [2024-12-09 14:44:49.190772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:11.125 [2024-12-09 14:44:49.190788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.125 [2024-12-09 14:44:49.191020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:11.125 [2024-12-09 14:44:49.191207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:11.125 [2024-12-09 14:44:49.191216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:11.125 [2024-12-09 14:44:49.191369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.125 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.125 "name": "raid_bdev1", 00:12:11.125 "uuid": "473c8061-4d8e-4930-9272-14b9f74439e4", 00:12:11.125 "strip_size_kb": 0, 00:12:11.125 "state": "online", 00:12:11.125 "raid_level": "raid1", 00:12:11.126 "superblock": true, 00:12:11.126 "num_base_bdevs": 4, 00:12:11.126 "num_base_bdevs_discovered": 4, 00:12:11.126 
"num_base_bdevs_operational": 4, 00:12:11.126 "base_bdevs_list": [ 00:12:11.126 { 00:12:11.126 "name": "BaseBdev1", 00:12:11.126 "uuid": "05ede23f-dbb4-537d-8042-7ea4458cdd4c", 00:12:11.126 "is_configured": true, 00:12:11.126 "data_offset": 2048, 00:12:11.126 "data_size": 63488 00:12:11.126 }, 00:12:11.126 { 00:12:11.126 "name": "BaseBdev2", 00:12:11.126 "uuid": "cc5488fa-1ff1-5fe5-af3b-cf7a2f51db61", 00:12:11.126 "is_configured": true, 00:12:11.126 "data_offset": 2048, 00:12:11.126 "data_size": 63488 00:12:11.126 }, 00:12:11.126 { 00:12:11.126 "name": "BaseBdev3", 00:12:11.126 "uuid": "ce625929-9038-5541-a4f5-4957e43226e8", 00:12:11.126 "is_configured": true, 00:12:11.126 "data_offset": 2048, 00:12:11.126 "data_size": 63488 00:12:11.126 }, 00:12:11.126 { 00:12:11.126 "name": "BaseBdev4", 00:12:11.126 "uuid": "44dad8b0-d6ed-5766-b0fd-06ae2c0bb576", 00:12:11.126 "is_configured": true, 00:12:11.126 "data_offset": 2048, 00:12:11.126 "data_size": 63488 00:12:11.126 } 00:12:11.126 ] 00:12:11.126 }' 00:12:11.126 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.126 14:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.694 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:11.694 14:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:11.694 [2024-12-09 14:44:49.773102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.632 [2024-12-09 14:44:50.680105] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:12.632 [2024-12-09 14:44:50.680233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.632 [2024-12-09 14:44:50.680532] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.632 "name": "raid_bdev1", 00:12:12.632 "uuid": "473c8061-4d8e-4930-9272-14b9f74439e4", 00:12:12.632 "strip_size_kb": 0, 00:12:12.632 "state": "online", 00:12:12.632 "raid_level": "raid1", 00:12:12.632 "superblock": true, 00:12:12.632 "num_base_bdevs": 4, 00:12:12.632 "num_base_bdevs_discovered": 3, 00:12:12.632 "num_base_bdevs_operational": 3, 00:12:12.632 "base_bdevs_list": [ 00:12:12.632 { 00:12:12.632 "name": null, 00:12:12.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.632 "is_configured": false, 00:12:12.632 "data_offset": 0, 00:12:12.632 "data_size": 63488 00:12:12.632 }, 00:12:12.632 { 00:12:12.632 "name": "BaseBdev2", 00:12:12.632 "uuid": "cc5488fa-1ff1-5fe5-af3b-cf7a2f51db61", 00:12:12.632 "is_configured": true, 00:12:12.632 "data_offset": 2048, 00:12:12.632 "data_size": 63488 00:12:12.632 }, 00:12:12.632 { 00:12:12.632 "name": "BaseBdev3", 00:12:12.632 "uuid": "ce625929-9038-5541-a4f5-4957e43226e8", 00:12:12.632 "is_configured": true, 00:12:12.632 "data_offset": 2048, 00:12:12.632 "data_size": 63488 00:12:12.632 }, 00:12:12.632 { 00:12:12.632 "name": "BaseBdev4", 00:12:12.632 "uuid": "44dad8b0-d6ed-5766-b0fd-06ae2c0bb576", 00:12:12.632 "is_configured": true, 00:12:12.632 "data_offset": 2048, 00:12:12.632 "data_size": 63488 00:12:12.632 } 00:12:12.632 ] 
00:12:12.632 }' 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.632 14:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.199 [2024-12-09 14:44:51.160699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.199 [2024-12-09 14:44:51.160735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.199 [2024-12-09 14:44:51.163687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.199 [2024-12-09 14:44:51.163757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.199 [2024-12-09 14:44:51.163889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.199 [2024-12-09 14:44:51.163942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:13.199 { 00:12:13.199 "results": [ 00:12:13.199 { 00:12:13.199 "job": "raid_bdev1", 00:12:13.199 "core_mask": "0x1", 00:12:13.199 "workload": "randrw", 00:12:13.199 "percentage": 50, 00:12:13.199 "status": "finished", 00:12:13.199 "queue_depth": 1, 00:12:13.199 "io_size": 131072, 00:12:13.199 "runtime": 1.388341, 00:12:13.199 "iops": 10989.375088684985, 00:12:13.199 "mibps": 1373.671886085623, 00:12:13.199 "io_failed": 0, 00:12:13.199 "io_timeout": 0, 00:12:13.199 "avg_latency_us": 88.20650714268746, 00:12:13.199 "min_latency_us": 24.593886462882097, 00:12:13.199 "max_latency_us": 1531.0812227074236 00:12:13.199 } 00:12:13.199 ], 00:12:13.199 "core_count": 1 
00:12:13.199 } 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76475 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76475 ']' 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76475 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76475 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76475' 00:12:13.199 killing process with pid 76475 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76475 00:12:13.199 [2024-12-09 14:44:51.197230] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.199 14:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76475 00:12:13.457 [2024-12-09 14:44:51.532386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bC1yf6lyuX 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:14.901 ************************************ 00:12:14.901 END TEST raid_write_error_test 00:12:14.901 ************************************ 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:14.901 00:12:14.901 real 0m4.797s 00:12:14.901 user 0m5.710s 00:12:14.901 sys 0m0.569s 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.901 14:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.901 14:44:52 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:14.901 14:44:52 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:14.901 14:44:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:14.901 14:44:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:14.901 14:44:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.901 14:44:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.901 ************************************ 00:12:14.901 START TEST raid_rebuild_test 00:12:14.901 ************************************ 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:14.901 
14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:14.901 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=76614 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 76614 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 76614 ']' 00:12:14.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.902 14:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.902 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:14.902 Zero copy mechanism will not be used. 00:12:14.902 [2024-12-09 14:44:52.934320] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:12:14.902 [2024-12-09 14:44:52.934440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76614 ] 00:12:15.160 [2024-12-09 14:44:53.108481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.160 [2024-12-09 14:44:53.229475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.419 [2024-12-09 14:44:53.435252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.419 [2024-12-09 14:44:53.435319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 BaseBdev1_malloc 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 [2024-12-09 14:44:53.874983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:15.987 
[2024-12-09 14:44:53.875128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.987 [2024-12-09 14:44:53.875176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:15.987 [2024-12-09 14:44:53.875190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.987 [2024-12-09 14:44:53.877485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.987 [2024-12-09 14:44:53.877527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:15.987 BaseBdev1 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 BaseBdev2_malloc 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 [2024-12-09 14:44:53.935689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:15.987 [2024-12-09 14:44:53.935834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.987 [2024-12-09 14:44:53.935869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:15.987 [2024-12-09 14:44:53.935882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.987 [2024-12-09 14:44:53.938199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.987 [2024-12-09 14:44:53.938242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:15.987 BaseBdev2 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 spare_malloc 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 spare_delay 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 [2024-12-09 14:44:54.022918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:15.987 [2024-12-09 14:44:54.022979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:15.987 [2024-12-09 14:44:54.022999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:15.987 [2024-12-09 14:44:54.023009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.987 [2024-12-09 14:44:54.025224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.987 [2024-12-09 14:44:54.025264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:15.987 spare 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 [2024-12-09 14:44:54.034946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.987 [2024-12-09 14:44:54.036798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.987 [2024-12-09 14:44:54.036887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:15.987 [2024-12-09 14:44:54.036901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:15.987 [2024-12-09 14:44:54.037143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:15.987 [2024-12-09 14:44:54.037322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:15.987 [2024-12-09 14:44:54.037333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:15.987 [2024-12-09 14:44:54.037464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.987 "name": "raid_bdev1", 00:12:15.987 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:15.987 "strip_size_kb": 0, 00:12:15.987 "state": "online", 00:12:15.987 
"raid_level": "raid1", 00:12:15.987 "superblock": false, 00:12:15.987 "num_base_bdevs": 2, 00:12:15.987 "num_base_bdevs_discovered": 2, 00:12:15.987 "num_base_bdevs_operational": 2, 00:12:15.987 "base_bdevs_list": [ 00:12:15.987 { 00:12:15.987 "name": "BaseBdev1", 00:12:15.987 "uuid": "f56e46a3-3cdc-5594-a3b9-62e8bf96092b", 00:12:15.987 "is_configured": true, 00:12:15.987 "data_offset": 0, 00:12:15.987 "data_size": 65536 00:12:15.987 }, 00:12:15.987 { 00:12:15.987 "name": "BaseBdev2", 00:12:15.987 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:15.987 "is_configured": true, 00:12:15.987 "data_offset": 0, 00:12:15.987 "data_size": 65536 00:12:15.987 } 00:12:15.987 ] 00:12:15.987 }' 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.987 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.554 [2024-12-09 14:44:54.466524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:16.554 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:16.813 [2024-12-09 14:44:54.769850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:16.813 /dev/nbd0 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.813 1+0 records in 00:12:16.813 1+0 records out 00:12:16.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638171 s, 6.4 MB/s 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:16.813 14:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:22.099 65536+0 records in 00:12:22.099 65536+0 records out 00:12:22.099 33554432 bytes (34 MB, 32 MiB) copied, 4.54341 s, 7.4 MB/s 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:22.099 [2024-12-09 14:44:59.602191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.099 [2024-12-09 14:44:59.633391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.099 "name": "raid_bdev1", 00:12:22.099 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:22.099 "strip_size_kb": 0, 00:12:22.099 "state": "online", 00:12:22.099 "raid_level": "raid1", 00:12:22.099 "superblock": false, 00:12:22.099 "num_base_bdevs": 2, 00:12:22.099 "num_base_bdevs_discovered": 1, 00:12:22.099 "num_base_bdevs_operational": 1, 00:12:22.099 "base_bdevs_list": [ 00:12:22.099 { 00:12:22.099 "name": null, 00:12:22.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.099 "is_configured": false, 00:12:22.099 "data_offset": 0, 00:12:22.099 "data_size": 65536 00:12:22.099 }, 00:12:22.099 { 00:12:22.099 "name": "BaseBdev2", 00:12:22.099 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:22.099 "is_configured": true, 00:12:22.099 "data_offset": 0, 00:12:22.099 "data_size": 65536 00:12:22.099 } 00:12:22.099 ] 00:12:22.099 }' 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.099 14:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.099 14:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.099 14:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.099 14:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.099 [2024-12-09 14:45:00.108645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.100 [2024-12-09 14:45:00.127518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:22.100 14:45:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.100 14:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:22.100 [2024-12-09 14:45:00.129815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.038 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.298 "name": "raid_bdev1", 00:12:23.298 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:23.298 "strip_size_kb": 0, 00:12:23.298 "state": "online", 00:12:23.298 "raid_level": "raid1", 00:12:23.298 "superblock": false, 00:12:23.298 "num_base_bdevs": 2, 00:12:23.298 "num_base_bdevs_discovered": 2, 00:12:23.298 "num_base_bdevs_operational": 2, 00:12:23.298 "process": { 00:12:23.298 "type": "rebuild", 00:12:23.298 "target": "spare", 00:12:23.298 "progress": { 00:12:23.298 "blocks": 20480, 
00:12:23.298 "percent": 31 00:12:23.298 } 00:12:23.298 }, 00:12:23.298 "base_bdevs_list": [ 00:12:23.298 { 00:12:23.298 "name": "spare", 00:12:23.298 "uuid": "d0bb49f9-1950-5b04-824d-41a736d44c9c", 00:12:23.298 "is_configured": true, 00:12:23.298 "data_offset": 0, 00:12:23.298 "data_size": 65536 00:12:23.298 }, 00:12:23.298 { 00:12:23.298 "name": "BaseBdev2", 00:12:23.298 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:23.298 "is_configured": true, 00:12:23.298 "data_offset": 0, 00:12:23.298 "data_size": 65536 00:12:23.298 } 00:12:23.298 ] 00:12:23.298 }' 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.298 [2024-12-09 14:45:01.280807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.298 [2024-12-09 14:45:01.335892] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.298 [2024-12-09 14:45:01.336009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.298 [2024-12-09 14:45:01.336028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.298 [2024-12-09 14:45:01.336040] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.298 14:45:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.298 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.557 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.557 "name": "raid_bdev1", 00:12:23.557 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:23.557 "strip_size_kb": 0, 00:12:23.557 "state": "online", 00:12:23.557 "raid_level": "raid1", 00:12:23.557 
"superblock": false, 00:12:23.557 "num_base_bdevs": 2, 00:12:23.557 "num_base_bdevs_discovered": 1, 00:12:23.557 "num_base_bdevs_operational": 1, 00:12:23.557 "base_bdevs_list": [ 00:12:23.557 { 00:12:23.557 "name": null, 00:12:23.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.557 "is_configured": false, 00:12:23.557 "data_offset": 0, 00:12:23.557 "data_size": 65536 00:12:23.557 }, 00:12:23.557 { 00:12:23.557 "name": "BaseBdev2", 00:12:23.557 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:23.557 "is_configured": true, 00:12:23.557 "data_offset": 0, 00:12:23.557 "data_size": 65536 00:12:23.557 } 00:12:23.557 ] 00:12:23.557 }' 00:12:23.557 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.557 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.816 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.816 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:23.817 "name": "raid_bdev1", 00:12:23.817 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:23.817 "strip_size_kb": 0, 00:12:23.817 "state": "online", 00:12:23.817 "raid_level": "raid1", 00:12:23.817 "superblock": false, 00:12:23.817 "num_base_bdevs": 2, 00:12:23.817 "num_base_bdevs_discovered": 1, 00:12:23.817 "num_base_bdevs_operational": 1, 00:12:23.817 "base_bdevs_list": [ 00:12:23.817 { 00:12:23.817 "name": null, 00:12:23.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.817 "is_configured": false, 00:12:23.817 "data_offset": 0, 00:12:23.817 "data_size": 65536 00:12:23.817 }, 00:12:23.817 { 00:12:23.817 "name": "BaseBdev2", 00:12:23.817 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:23.817 "is_configured": true, 00:12:23.817 "data_offset": 0, 00:12:23.817 "data_size": 65536 00:12:23.817 } 00:12:23.817 ] 00:12:23.817 }' 00:12:23.817 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.076 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.076 14:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.076 14:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.076 14:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.076 14:45:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.076 14:45:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.076 [2024-12-09 14:45:02.027612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.076 [2024-12-09 14:45:02.045310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:24.076 14:45:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.076 
14:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:24.076 [2024-12-09 14:45:02.047342] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.012 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.012 "name": "raid_bdev1", 00:12:25.012 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:25.012 "strip_size_kb": 0, 00:12:25.012 "state": "online", 00:12:25.012 "raid_level": "raid1", 00:12:25.012 "superblock": false, 00:12:25.012 "num_base_bdevs": 2, 00:12:25.012 "num_base_bdevs_discovered": 2, 00:12:25.012 "num_base_bdevs_operational": 2, 00:12:25.012 "process": { 00:12:25.012 "type": "rebuild", 00:12:25.012 "target": "spare", 00:12:25.012 "progress": { 00:12:25.013 "blocks": 20480, 00:12:25.013 "percent": 31 00:12:25.013 } 00:12:25.013 }, 00:12:25.013 "base_bdevs_list": [ 
00:12:25.013 { 00:12:25.013 "name": "spare", 00:12:25.013 "uuid": "d0bb49f9-1950-5b04-824d-41a736d44c9c", 00:12:25.013 "is_configured": true, 00:12:25.013 "data_offset": 0, 00:12:25.013 "data_size": 65536 00:12:25.013 }, 00:12:25.013 { 00:12:25.013 "name": "BaseBdev2", 00:12:25.013 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:25.013 "is_configured": true, 00:12:25.013 "data_offset": 0, 00:12:25.013 "data_size": 65536 00:12:25.013 } 00:12:25.013 ] 00:12:25.013 }' 00:12:25.013 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.272 
14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.272 "name": "raid_bdev1", 00:12:25.272 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:25.272 "strip_size_kb": 0, 00:12:25.272 "state": "online", 00:12:25.272 "raid_level": "raid1", 00:12:25.272 "superblock": false, 00:12:25.272 "num_base_bdevs": 2, 00:12:25.272 "num_base_bdevs_discovered": 2, 00:12:25.272 "num_base_bdevs_operational": 2, 00:12:25.272 "process": { 00:12:25.272 "type": "rebuild", 00:12:25.272 "target": "spare", 00:12:25.272 "progress": { 00:12:25.272 "blocks": 22528, 00:12:25.272 "percent": 34 00:12:25.272 } 00:12:25.272 }, 00:12:25.272 "base_bdevs_list": [ 00:12:25.272 { 00:12:25.272 "name": "spare", 00:12:25.272 "uuid": "d0bb49f9-1950-5b04-824d-41a736d44c9c", 00:12:25.272 "is_configured": true, 00:12:25.272 "data_offset": 0, 00:12:25.272 "data_size": 65536 00:12:25.272 }, 00:12:25.272 { 00:12:25.272 "name": "BaseBdev2", 00:12:25.272 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:25.272 "is_configured": true, 00:12:25.272 "data_offset": 0, 00:12:25.272 "data_size": 65536 00:12:25.272 } 00:12:25.272 ] 00:12:25.272 }' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.272 14:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.248 14:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.507 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.507 "name": "raid_bdev1", 00:12:26.507 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:26.507 "strip_size_kb": 0, 00:12:26.507 "state": "online", 00:12:26.507 "raid_level": "raid1", 00:12:26.507 "superblock": false, 00:12:26.507 "num_base_bdevs": 2, 00:12:26.507 "num_base_bdevs_discovered": 2, 00:12:26.507 "num_base_bdevs_operational": 2, 00:12:26.507 "process": { 
00:12:26.507 "type": "rebuild", 00:12:26.507 "target": "spare", 00:12:26.507 "progress": { 00:12:26.507 "blocks": 45056, 00:12:26.507 "percent": 68 00:12:26.507 } 00:12:26.507 }, 00:12:26.507 "base_bdevs_list": [ 00:12:26.507 { 00:12:26.507 "name": "spare", 00:12:26.507 "uuid": "d0bb49f9-1950-5b04-824d-41a736d44c9c", 00:12:26.507 "is_configured": true, 00:12:26.507 "data_offset": 0, 00:12:26.507 "data_size": 65536 00:12:26.507 }, 00:12:26.507 { 00:12:26.508 "name": "BaseBdev2", 00:12:26.508 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:26.508 "is_configured": true, 00:12:26.508 "data_offset": 0, 00:12:26.508 "data_size": 65536 00:12:26.508 } 00:12:26.508 ] 00:12:26.508 }' 00:12:26.508 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.508 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.508 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.508 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.508 14:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.445 [2024-12-09 14:45:05.262886] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:27.445 [2024-12-09 14:45:05.262976] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:27.445 [2024-12-09 14:45:05.263032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.445 "name": "raid_bdev1", 00:12:27.445 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:27.445 "strip_size_kb": 0, 00:12:27.445 "state": "online", 00:12:27.445 "raid_level": "raid1", 00:12:27.445 "superblock": false, 00:12:27.445 "num_base_bdevs": 2, 00:12:27.445 "num_base_bdevs_discovered": 2, 00:12:27.445 "num_base_bdevs_operational": 2, 00:12:27.445 "base_bdevs_list": [ 00:12:27.445 { 00:12:27.445 "name": "spare", 00:12:27.445 "uuid": "d0bb49f9-1950-5b04-824d-41a736d44c9c", 00:12:27.445 "is_configured": true, 00:12:27.445 "data_offset": 0, 00:12:27.445 "data_size": 65536 00:12:27.445 }, 00:12:27.445 { 00:12:27.445 "name": "BaseBdev2", 00:12:27.445 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:27.445 "is_configured": true, 00:12:27.445 "data_offset": 0, 00:12:27.445 "data_size": 65536 00:12:27.445 } 00:12:27.445 ] 00:12:27.445 }' 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.445 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:27.445 14:45:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.703 "name": "raid_bdev1", 00:12:27.703 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:27.703 "strip_size_kb": 0, 00:12:27.703 "state": "online", 00:12:27.703 "raid_level": "raid1", 00:12:27.703 "superblock": false, 00:12:27.703 "num_base_bdevs": 2, 00:12:27.703 "num_base_bdevs_discovered": 2, 00:12:27.703 "num_base_bdevs_operational": 2, 00:12:27.703 "base_bdevs_list": [ 00:12:27.703 { 00:12:27.703 "name": "spare", 00:12:27.703 "uuid": "d0bb49f9-1950-5b04-824d-41a736d44c9c", 00:12:27.703 "is_configured": true, 
00:12:27.703 "data_offset": 0, 00:12:27.703 "data_size": 65536 00:12:27.703 }, 00:12:27.703 { 00:12:27.703 "name": "BaseBdev2", 00:12:27.703 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:27.703 "is_configured": true, 00:12:27.703 "data_offset": 0, 00:12:27.703 "data_size": 65536 00:12:27.703 } 00:12:27.703 ] 00:12:27.703 }' 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.703 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.704 "name": "raid_bdev1", 00:12:27.704 "uuid": "bbcc462a-e00f-4800-a686-4497fb0e8ab0", 00:12:27.704 "strip_size_kb": 0, 00:12:27.704 "state": "online", 00:12:27.704 "raid_level": "raid1", 00:12:27.704 "superblock": false, 00:12:27.704 "num_base_bdevs": 2, 00:12:27.704 "num_base_bdevs_discovered": 2, 00:12:27.704 "num_base_bdevs_operational": 2, 00:12:27.704 "base_bdevs_list": [ 00:12:27.704 { 00:12:27.704 "name": "spare", 00:12:27.704 "uuid": "d0bb49f9-1950-5b04-824d-41a736d44c9c", 00:12:27.704 "is_configured": true, 00:12:27.704 "data_offset": 0, 00:12:27.704 "data_size": 65536 00:12:27.704 }, 00:12:27.704 { 00:12:27.704 "name": "BaseBdev2", 00:12:27.704 "uuid": "e6a7edad-4649-5b31-9ce7-e29785913185", 00:12:27.704 "is_configured": true, 00:12:27.704 "data_offset": 0, 00:12:27.704 "data_size": 65536 00:12:27.704 } 00:12:27.704 ] 00:12:27.704 }' 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.704 14:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.272 [2024-12-09 14:45:06.161289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.272 [2024-12-09 14:45:06.161404] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.272 [2024-12-09 14:45:06.161530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.272 [2024-12-09 14:45:06.161648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.272 [2024-12-09 14:45:06.161710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:28.272 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.273 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:28.533 /dev/nbd0 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.533 1+0 records in 00:12:28.533 1+0 records out 00:12:28.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390959 s, 10.5 MB/s 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.533 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:28.793 /dev/nbd1 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.793 1+0 records in 00:12:28.793 1+0 records out 00:12:28.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253079 s, 16.2 MB/s 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.793 14:45:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.053 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 76614 00:12:29.313 14:45:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 76614 ']' 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 76614 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76614 00:12:29.313 killing process with pid 76614 00:12:29.313 Received shutdown signal, test time was about 60.000000 seconds 00:12:29.313 00:12:29.313 Latency(us) 00:12:29.313 [2024-12-09T14:45:07.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.313 [2024-12-09T14:45:07.435Z] =================================================================================================================== 00:12:29.313 [2024-12-09T14:45:07.435Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76614' 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 76614 00:12:29.313 [2024-12-09 14:45:07.402665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.313 14:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 76614 00:12:29.883 [2024-12-09 14:45:07.723100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.821 14:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:30.821 00:12:30.821 real 0m16.091s 00:12:30.821 user 0m18.140s 00:12:30.821 sys 0m3.025s 00:12:30.821 
************************************ 00:12:30.821 END TEST raid_rebuild_test 00:12:30.821 ************************************ 00:12:30.821 14:45:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.821 14:45:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.080 14:45:08 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:31.080 14:45:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:31.080 14:45:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.080 14:45:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.080 ************************************ 00:12:31.080 START TEST raid_rebuild_test_sb 00:12:31.080 ************************************ 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.080 14:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77042 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77042 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77042 ']' 00:12:31.080 14:45:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.080 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.080 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.080 Zero copy mechanism will not be used. 00:12:31.080 [2024-12-09 14:45:09.112847] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:12:31.080 [2024-12-09 14:45:09.112991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77042 ] 00:12:31.339 [2024-12-09 14:45:09.289275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.339 [2024-12-09 14:45:09.418683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.599 [2024-12-09 14:45:09.635297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.599 [2024-12-09 14:45:09.635351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.858 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.858 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:31.858 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:31.858 14:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:31.858 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.858 14:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.116 BaseBdev1_malloc 00:12:32.116 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.116 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:32.116 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.116 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.116 [2024-12-09 14:45:10.016674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:32.116 [2024-12-09 14:45:10.016784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.116 [2024-12-09 14:45:10.016848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:32.116 [2024-12-09 14:45:10.016887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.116 [2024-12-09 14:45:10.019193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.117 [2024-12-09 14:45:10.019274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.117 BaseBdev1 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.117 14:45:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 BaseBdev2_malloc 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 [2024-12-09 14:45:10.076810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:32.117 [2024-12-09 14:45:10.076943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.117 [2024-12-09 14:45:10.077027] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:32.117 [2024-12-09 14:45:10.077072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.117 [2024-12-09 14:45:10.079651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.117 [2024-12-09 14:45:10.079749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.117 BaseBdev2 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 spare_malloc 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 spare_delay 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 [2024-12-09 14:45:10.161517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.117 [2024-12-09 14:45:10.161650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.117 [2024-12-09 14:45:10.161717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:32.117 [2024-12-09 14:45:10.161758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.117 [2024-12-09 14:45:10.164183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.117 [2024-12-09 14:45:10.164281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.117 spare 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.117 14:45:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 [2024-12-09 14:45:10.173551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.117 [2024-12-09 14:45:10.175635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.117 [2024-12-09 14:45:10.175837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:32.117 [2024-12-09 14:45:10.175854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.117 [2024-12-09 14:45:10.176146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:32.117 [2024-12-09 14:45:10.176338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:32.117 [2024-12-09 14:45:10.176348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:32.117 [2024-12-09 14:45:10.176546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.117 "name": "raid_bdev1", 00:12:32.117 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:32.117 "strip_size_kb": 0, 00:12:32.117 "state": "online", 00:12:32.117 "raid_level": "raid1", 00:12:32.117 "superblock": true, 00:12:32.117 "num_base_bdevs": 2, 00:12:32.117 "num_base_bdevs_discovered": 2, 00:12:32.117 "num_base_bdevs_operational": 2, 00:12:32.117 "base_bdevs_list": [ 00:12:32.117 { 00:12:32.117 "name": "BaseBdev1", 00:12:32.117 "uuid": "aa71e3c4-2907-5070-889c-4db55d862b77", 00:12:32.117 "is_configured": true, 00:12:32.117 "data_offset": 2048, 00:12:32.117 "data_size": 63488 00:12:32.117 }, 00:12:32.117 { 00:12:32.117 "name": "BaseBdev2", 00:12:32.117 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:32.117 "is_configured": true, 00:12:32.117 "data_offset": 2048, 00:12:32.117 "data_size": 63488 00:12:32.117 } 00:12:32.117 ] 00:12:32.117 }' 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.117 14:45:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.686 [2024-12-09 14:45:10.637062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.686 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:32.946 [2024-12-09 14:45:10.940283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:32.946 /dev/nbd0 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.946 1+0 records in 00:12:32.946 1+0 records out 00:12:32.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328772 s, 12.5 MB/s 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:32.946 14:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.946 14:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:32.946 14:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:32.946 14:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.946 14:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.946 14:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:32.946 14:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:32.946 14:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:38.267 63488+0 records in 00:12:38.267 63488+0 records out 00:12:38.267 32505856 bytes (33 MB, 31 MiB) copied, 4.34232 s, 7.5 MB/s 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.267 14:45:15 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:38.267 [2024-12-09 14:45:15.584935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.267 [2024-12-09 14:45:15.605021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.267 "name": "raid_bdev1", 00:12:38.267 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:38.267 "strip_size_kb": 0, 00:12:38.267 "state": "online", 00:12:38.267 "raid_level": "raid1", 00:12:38.267 "superblock": true, 
00:12:38.267 "num_base_bdevs": 2, 00:12:38.267 "num_base_bdevs_discovered": 1, 00:12:38.267 "num_base_bdevs_operational": 1, 00:12:38.267 "base_bdevs_list": [ 00:12:38.267 { 00:12:38.267 "name": null, 00:12:38.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.267 "is_configured": false, 00:12:38.267 "data_offset": 0, 00:12:38.267 "data_size": 63488 00:12:38.267 }, 00:12:38.267 { 00:12:38.267 "name": "BaseBdev2", 00:12:38.267 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:38.267 "is_configured": true, 00:12:38.267 "data_offset": 2048, 00:12:38.267 "data_size": 63488 00:12:38.267 } 00:12:38.267 ] 00:12:38.267 }' 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.267 14:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.267 14:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:38.267 14:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.267 14:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.267 [2024-12-09 14:45:16.080270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.267 [2024-12-09 14:45:16.100277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:38.267 14:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.267 [2024-12-09 14:45:16.102412] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.267 14:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
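The trace above repeatedly applies one verification pattern (`verify_raid_bdev_state` in bdev_raid.sh): dump all raid bdevs over RPC, select one by name with jq, then compare individual fields. A minimal stand-alone sketch of that pattern follows; the inline JSON is a stand-in for real `rpc.py bdev_raid_get_bdevs all` output (field names match the dumps recorded in the log), and `jq` must be installed:

```shell
# Stand-alone sketch of the verify_raid_bdev_state pattern from the trace.
# The inline JSON stands in for `rpc.py bdev_raid_get_bdevs all` output.
rpc_output='[{"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
              "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1}]'

# Select the bdev of interest, as the test does with jq.
raid_bdev_info=$(echo "$rpc_output" | jq -r '.[] | select(.name == "raid_bdev1")')

# Compare individual fields against the expected state.
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
[ "$state" = "online" ] || { echo "unexpected state: $state"; exit 1; }
[ "$level" = "raid1" ] || { echo "unexpected level: $level"; exit 1; }
echo "raid_bdev1 verified: state=$state raid_level=$level"
```

The `select(.name == "raid_bdev1")` filter is exactly what the logged `jq -r` invocations use; only the data source is simulated here.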
00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.207 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.207 "name": "raid_bdev1", 00:12:39.207 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:39.207 "strip_size_kb": 0, 00:12:39.207 "state": "online", 00:12:39.207 "raid_level": "raid1", 00:12:39.207 "superblock": true, 00:12:39.207 "num_base_bdevs": 2, 00:12:39.207 "num_base_bdevs_discovered": 2, 00:12:39.207 "num_base_bdevs_operational": 2, 00:12:39.207 "process": { 00:12:39.207 "type": "rebuild", 00:12:39.207 "target": "spare", 00:12:39.207 "progress": { 00:12:39.207 "blocks": 20480, 00:12:39.207 "percent": 32 00:12:39.207 } 00:12:39.207 }, 00:12:39.207 "base_bdevs_list": [ 00:12:39.207 { 00:12:39.207 "name": "spare", 00:12:39.208 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:39.208 "is_configured": true, 00:12:39.208 "data_offset": 2048, 00:12:39.208 "data_size": 63488 00:12:39.208 }, 00:12:39.208 { 00:12:39.208 "name": "BaseBdev2", 00:12:39.208 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:39.208 "is_configured": true, 00:12:39.208 "data_offset": 2048, 00:12:39.208 "data_size": 63488 
00:12:39.208 } 00:12:39.208 ] 00:12:39.208 }' 00:12:39.208 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.208 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.208 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.208 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.208 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:39.208 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.208 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.208 [2024-12-09 14:45:17.266108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.208 [2024-12-09 14:45:17.308577] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:39.208 [2024-12-09 14:45:17.308666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.208 [2024-12-09 14:45:17.308681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.208 [2024-12-09 14:45:17.308691] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.467 "name": "raid_bdev1", 00:12:39.467 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:39.467 "strip_size_kb": 0, 00:12:39.467 "state": "online", 00:12:39.467 "raid_level": "raid1", 00:12:39.467 "superblock": true, 00:12:39.467 "num_base_bdevs": 2, 00:12:39.467 "num_base_bdevs_discovered": 1, 00:12:39.467 "num_base_bdevs_operational": 1, 00:12:39.467 "base_bdevs_list": [ 00:12:39.467 { 00:12:39.467 "name": null, 00:12:39.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.467 "is_configured": false, 00:12:39.467 "data_offset": 0, 00:12:39.467 "data_size": 63488 00:12:39.467 }, 00:12:39.467 { 00:12:39.467 "name": "BaseBdev2", 00:12:39.467 "uuid": 
"ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:39.467 "is_configured": true, 00:12:39.467 "data_offset": 2048, 00:12:39.467 "data_size": 63488 00:12:39.467 } 00:12:39.467 ] 00:12:39.467 }' 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.467 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.985 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.985 "name": "raid_bdev1", 00:12:39.985 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:39.985 "strip_size_kb": 0, 00:12:39.985 "state": "online", 00:12:39.985 "raid_level": "raid1", 00:12:39.985 "superblock": true, 00:12:39.985 "num_base_bdevs": 2, 00:12:39.985 "num_base_bdevs_discovered": 1, 00:12:39.985 "num_base_bdevs_operational": 1, 00:12:39.985 "base_bdevs_list": [ 00:12:39.985 { 
00:12:39.985 "name": null, 00:12:39.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.985 "is_configured": false, 00:12:39.985 "data_offset": 0, 00:12:39.985 "data_size": 63488 00:12:39.985 }, 00:12:39.985 { 00:12:39.985 "name": "BaseBdev2", 00:12:39.985 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:39.985 "is_configured": true, 00:12:39.985 "data_offset": 2048, 00:12:39.986 "data_size": 63488 00:12:39.986 } 00:12:39.986 ] 00:12:39.986 }' 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.986 [2024-12-09 14:45:17.946561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.986 [2024-12-09 14:45:17.964444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.986 14:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:39.986 [2024-12-09 14:45:17.966449] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.923 14:45:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.923 14:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.923 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.923 "name": "raid_bdev1", 00:12:40.923 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:40.923 "strip_size_kb": 0, 00:12:40.923 "state": "online", 00:12:40.923 "raid_level": "raid1", 00:12:40.923 "superblock": true, 00:12:40.923 "num_base_bdevs": 2, 00:12:40.923 "num_base_bdevs_discovered": 2, 00:12:40.923 "num_base_bdevs_operational": 2, 00:12:40.923 "process": { 00:12:40.923 "type": "rebuild", 00:12:40.923 "target": "spare", 00:12:40.923 "progress": { 00:12:40.923 "blocks": 20480, 00:12:40.923 "percent": 32 00:12:40.923 } 00:12:40.923 }, 00:12:40.923 "base_bdevs_list": [ 00:12:40.923 { 00:12:40.923 "name": "spare", 00:12:40.923 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:40.923 "is_configured": true, 00:12:40.923 "data_offset": 2048, 00:12:40.923 "data_size": 63488 00:12:40.923 }, 00:12:40.923 { 00:12:40.923 "name": "BaseBdev2", 00:12:40.923 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:40.923 
"is_configured": true, 00:12:40.923 "data_offset": 2048, 00:12:40.923 "data_size": 63488 00:12:40.923 } 00:12:40.923 ] 00:12:40.923 }' 00:12:40.923 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:41.183 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=391 00:12:41.183 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
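The `line 666: [: =: unary operator expected` error recorded above is a classic single-bracket pitfall: the traced command `'[' = false ']'` shows the left-hand operand expanded to nothing, leaving `[` with a stray `=`. A minimal illustration of the failure mode and the usual quoting fix (not a patch for bdev_raid.sh itself, just a reproduction of the mechanism):

```shell
# Reproduce the "[: =: unary operator expected" class of error: an empty
# variable expanded unquoted inside [ ] disappears as an operand.
flag=""

# Broken form (what the trace records as '[' = false ']'): run in a
# subshell and capture the status so the error does not kill the script.
status=0
( [ $flag = false ] ) 2>/dev/null || status=$?
echo "unquoted test exit status: $status"   # 2: [ hit a syntax error

# Quoted form: the empty string remains a real operand, so the
# comparison just evaluates to false (status 1) instead of erroring.
status=0
[ "$flag" = false ] || status=$?
echo "quoted test exit status: $status"     # 1: clean 'not equal'
```

Quoting the expansion (or using bash's `[[ ]]`, which does not word-split) is the standard way to make such tests safe when the variable may be empty.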
00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.184 "name": "raid_bdev1", 00:12:41.184 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:41.184 "strip_size_kb": 0, 00:12:41.184 "state": "online", 00:12:41.184 "raid_level": "raid1", 00:12:41.184 "superblock": true, 00:12:41.184 "num_base_bdevs": 2, 00:12:41.184 "num_base_bdevs_discovered": 2, 00:12:41.184 "num_base_bdevs_operational": 2, 00:12:41.184 "process": { 00:12:41.184 "type": "rebuild", 00:12:41.184 "target": "spare", 00:12:41.184 "progress": { 00:12:41.184 "blocks": 22528, 00:12:41.184 "percent": 35 00:12:41.184 } 00:12:41.184 }, 00:12:41.184 "base_bdevs_list": [ 00:12:41.184 { 00:12:41.184 "name": "spare", 00:12:41.184 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:41.184 "is_configured": true, 00:12:41.184 "data_offset": 2048, 00:12:41.184 "data_size": 63488 00:12:41.184 }, 00:12:41.184 { 00:12:41.184 "name": "BaseBdev2", 00:12:41.184 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:41.184 "is_configured": true, 00:12:41.184 "data_offset": 2048, 00:12:41.184 "data_size": 63488 00:12:41.184 } 00:12:41.184 ] 00:12:41.184 }' 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.184 14:45:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.184 14:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.562 "name": "raid_bdev1", 00:12:42.562 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:42.562 "strip_size_kb": 0, 00:12:42.562 "state": "online", 00:12:42.562 "raid_level": "raid1", 00:12:42.562 "superblock": true, 00:12:42.562 "num_base_bdevs": 2, 00:12:42.562 "num_base_bdevs_discovered": 2, 00:12:42.562 "num_base_bdevs_operational": 2, 00:12:42.562 "process": { 
00:12:42.562 "type": "rebuild", 00:12:42.562 "target": "spare", 00:12:42.562 "progress": { 00:12:42.562 "blocks": 45056, 00:12:42.562 "percent": 70 00:12:42.562 } 00:12:42.562 }, 00:12:42.562 "base_bdevs_list": [ 00:12:42.562 { 00:12:42.562 "name": "spare", 00:12:42.562 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:42.562 "is_configured": true, 00:12:42.562 "data_offset": 2048, 00:12:42.562 "data_size": 63488 00:12:42.562 }, 00:12:42.562 { 00:12:42.562 "name": "BaseBdev2", 00:12:42.562 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:42.562 "is_configured": true, 00:12:42.562 "data_offset": 2048, 00:12:42.562 "data_size": 63488 00:12:42.562 } 00:12:42.562 ] 00:12:42.562 }' 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.562 14:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.129 [2024-12-09 14:45:21.081673] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:43.129 [2024-12-09 14:45:21.081880] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:43.129 [2024-12-09 14:45:21.082065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.388 
14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.388 "name": "raid_bdev1", 00:12:43.388 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:43.388 "strip_size_kb": 0, 00:12:43.388 "state": "online", 00:12:43.388 "raid_level": "raid1", 00:12:43.388 "superblock": true, 00:12:43.388 "num_base_bdevs": 2, 00:12:43.388 "num_base_bdevs_discovered": 2, 00:12:43.388 "num_base_bdevs_operational": 2, 00:12:43.388 "base_bdevs_list": [ 00:12:43.388 { 00:12:43.388 "name": "spare", 00:12:43.388 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:43.388 "is_configured": true, 00:12:43.388 "data_offset": 2048, 00:12:43.388 "data_size": 63488 00:12:43.388 }, 00:12:43.388 { 00:12:43.388 "name": "BaseBdev2", 00:12:43.388 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:43.388 "is_configured": true, 00:12:43.388 "data_offset": 2048, 00:12:43.388 "data_size": 63488 00:12:43.388 } 00:12:43.388 ] 00:12:43.388 }' 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:43.388 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.647 "name": "raid_bdev1", 00:12:43.647 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:43.647 "strip_size_kb": 0, 00:12:43.647 "state": "online", 00:12:43.647 "raid_level": "raid1", 00:12:43.647 "superblock": true, 00:12:43.647 "num_base_bdevs": 2, 00:12:43.647 "num_base_bdevs_discovered": 2, 00:12:43.647 "num_base_bdevs_operational": 2, 00:12:43.647 "base_bdevs_list": [ 00:12:43.647 { 00:12:43.647 
"name": "spare", 00:12:43.647 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:43.647 "is_configured": true, 00:12:43.647 "data_offset": 2048, 00:12:43.647 "data_size": 63488 00:12:43.647 }, 00:12:43.647 { 00:12:43.647 "name": "BaseBdev2", 00:12:43.647 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:43.647 "is_configured": true, 00:12:43.647 "data_offset": 2048, 00:12:43.647 "data_size": 63488 00:12:43.647 } 00:12:43.647 ] 00:12:43.647 }' 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
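The rebuild-monitoring loop visible earlier in the trace (bdev_raid.sh around lines 706-711) is a bounded poll: re-read `.process.type` / `.process.target` via jq and `sleep 1` while bash's `SECONDS` stays below a timeout. A self-contained sketch of that loop shape, with a hypothetical `query_process_type` standing in for the real `rpc.py bdev_raid_get_bdevs all | jq -r '.process.type // "none"'` pipeline:

```shell
# Bounded polling loop in the style of bdev_raid.sh@706-711 (bash: uses
# the SECONDS builtin). query_process_type is a hypothetical stand-in
# that reports "rebuild" for the first three polls and then "none".
polls=0
query_process_type() {
    if [ "$polls" -lt 3 ]; then echo rebuild; else echo none; fi
}

timeout=$((SECONDS + 10))
ptype=rebuild
while [ "$SECONDS" -lt "$timeout" ]; do
    ptype=$(query_process_type)
    [ "$ptype" = rebuild ] || break   # rebuild finished (or never ran)
    polls=$((polls + 1))
    # the real test sleeps 1s here; omitted so the sketch runs instantly
done
echo "polled $polls times, final process type: $ptype"
```

Bounding the loop on `SECONDS` rather than a fixed iteration count is what lets the real test tolerate variable rebuild speed while still failing fast on a hang.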
00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.647 "name": "raid_bdev1", 00:12:43.647 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:43.647 "strip_size_kb": 0, 00:12:43.647 "state": "online", 00:12:43.647 "raid_level": "raid1", 00:12:43.647 "superblock": true, 00:12:43.647 "num_base_bdevs": 2, 00:12:43.647 "num_base_bdevs_discovered": 2, 00:12:43.647 "num_base_bdevs_operational": 2, 00:12:43.647 "base_bdevs_list": [ 00:12:43.647 { 00:12:43.647 "name": "spare", 00:12:43.647 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:43.647 "is_configured": true, 00:12:43.647 "data_offset": 2048, 00:12:43.647 "data_size": 63488 00:12:43.647 }, 00:12:43.647 { 00:12:43.647 "name": "BaseBdev2", 00:12:43.647 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:43.647 "is_configured": true, 00:12:43.647 "data_offset": 2048, 00:12:43.647 "data_size": 63488 00:12:43.647 } 00:12:43.647 ] 00:12:43.647 }' 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.647 14:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.215 [2024-12-09 14:45:22.183567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.215 [2024-12-09 14:45:22.183681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.215 [2024-12-09 14:45:22.183798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.215 [2024-12-09 14:45:22.183895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.215 [2024-12-09 14:45:22.183951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.215 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:44.480 /dev/nbd0 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.480 1+0 records in 00:12:44.480 1+0 records out 00:12:44.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547994 s, 7.5 MB/s 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.480 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:44.745 /dev/nbd1 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:44.745 14:45:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.745 1+0 records in 00:12:44.745 1+0 records out 00:12:44.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281118 s, 14.6 MB/s 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.745 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:45.004 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:45.004 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.004 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.004 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.004 
14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:45.004 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.004 14:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.263 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.521 [2024-12-09 14:45:23.452895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.521 [2024-12-09 14:45:23.453028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.521 [2024-12-09 14:45:23.453065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:45.521 [2024-12-09 14:45:23.453076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.521 [2024-12-09 14:45:23.455759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.521 [2024-12-09 14:45:23.455798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.521 [2024-12-09 14:45:23.455907] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:45.521 [2024-12-09 
14:45:23.455969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.521 [2024-12-09 14:45:23.456161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.521 spare 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.521 [2024-12-09 14:45:23.556080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:45.521 [2024-12-09 14:45:23.556149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.521 [2024-12-09 14:45:23.556511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:45.521 [2024-12-09 14:45:23.556760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:45.521 [2024-12-09 14:45:23.556777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:45.521 [2024-12-09 14:45:23.557000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.521 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.522 "name": "raid_bdev1", 00:12:45.522 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:45.522 "strip_size_kb": 0, 00:12:45.522 "state": "online", 00:12:45.522 "raid_level": "raid1", 00:12:45.522 "superblock": true, 00:12:45.522 "num_base_bdevs": 2, 00:12:45.522 "num_base_bdevs_discovered": 2, 00:12:45.522 "num_base_bdevs_operational": 2, 00:12:45.522 "base_bdevs_list": [ 00:12:45.522 { 00:12:45.522 "name": "spare", 00:12:45.522 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:45.522 "is_configured": true, 00:12:45.522 "data_offset": 2048, 00:12:45.522 "data_size": 63488 00:12:45.522 }, 00:12:45.522 { 00:12:45.522 "name": "BaseBdev2", 00:12:45.522 "uuid": 
"ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:45.522 "is_configured": true, 00:12:45.522 "data_offset": 2048, 00:12:45.522 "data_size": 63488 00:12:45.522 } 00:12:45.522 ] 00:12:45.522 }' 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.522 14:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.092 "name": "raid_bdev1", 00:12:46.092 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:46.092 "strip_size_kb": 0, 00:12:46.092 "state": "online", 00:12:46.092 "raid_level": "raid1", 00:12:46.092 "superblock": true, 00:12:46.092 "num_base_bdevs": 2, 00:12:46.092 "num_base_bdevs_discovered": 2, 00:12:46.092 "num_base_bdevs_operational": 2, 00:12:46.092 "base_bdevs_list": [ 00:12:46.092 { 
00:12:46.092 "name": "spare", 00:12:46.092 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:46.092 "is_configured": true, 00:12:46.092 "data_offset": 2048, 00:12:46.092 "data_size": 63488 00:12:46.092 }, 00:12:46.092 { 00:12:46.092 "name": "BaseBdev2", 00:12:46.092 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:46.092 "is_configured": true, 00:12:46.092 "data_offset": 2048, 00:12:46.092 "data_size": 63488 00:12:46.092 } 00:12:46.092 ] 00:12:46.092 }' 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.092 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.351 [2024-12-09 14:45:24.279735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.351 "name": "raid_bdev1", 00:12:46.351 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:46.351 "strip_size_kb": 0, 00:12:46.351 
"state": "online", 00:12:46.351 "raid_level": "raid1", 00:12:46.351 "superblock": true, 00:12:46.351 "num_base_bdevs": 2, 00:12:46.351 "num_base_bdevs_discovered": 1, 00:12:46.351 "num_base_bdevs_operational": 1, 00:12:46.351 "base_bdevs_list": [ 00:12:46.351 { 00:12:46.351 "name": null, 00:12:46.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.351 "is_configured": false, 00:12:46.351 "data_offset": 0, 00:12:46.351 "data_size": 63488 00:12:46.351 }, 00:12:46.351 { 00:12:46.351 "name": "BaseBdev2", 00:12:46.351 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:46.351 "is_configured": true, 00:12:46.351 "data_offset": 2048, 00:12:46.351 "data_size": 63488 00:12:46.351 } 00:12:46.351 ] 00:12:46.351 }' 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.351 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.920 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.920 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.920 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.920 [2024-12-09 14:45:24.751004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.920 [2024-12-09 14:45:24.751225] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:46.920 [2024-12-09 14:45:24.751246] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:46.920 [2024-12-09 14:45:24.751290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.920 [2024-12-09 14:45:24.768600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:46.920 14:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.920 14:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:46.920 [2024-12-09 14:45:24.770640] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.858 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.858 "name": "raid_bdev1", 00:12:47.858 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:47.858 "strip_size_kb": 0, 00:12:47.858 "state": "online", 00:12:47.858 "raid_level": "raid1", 
00:12:47.858 "superblock": true, 00:12:47.858 "num_base_bdevs": 2, 00:12:47.858 "num_base_bdevs_discovered": 2, 00:12:47.858 "num_base_bdevs_operational": 2, 00:12:47.858 "process": { 00:12:47.858 "type": "rebuild", 00:12:47.858 "target": "spare", 00:12:47.858 "progress": { 00:12:47.858 "blocks": 20480, 00:12:47.858 "percent": 32 00:12:47.858 } 00:12:47.858 }, 00:12:47.858 "base_bdevs_list": [ 00:12:47.858 { 00:12:47.858 "name": "spare", 00:12:47.858 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:47.858 "is_configured": true, 00:12:47.858 "data_offset": 2048, 00:12:47.858 "data_size": 63488 00:12:47.858 }, 00:12:47.858 { 00:12:47.858 "name": "BaseBdev2", 00:12:47.858 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:47.858 "is_configured": true, 00:12:47.858 "data_offset": 2048, 00:12:47.858 "data_size": 63488 00:12:47.858 } 00:12:47.858 ] 00:12:47.858 }' 00:12:47.859 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.859 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.859 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.859 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.859 14:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:47.859 14:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.859 14:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.859 [2024-12-09 14:45:25.934914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.859 [2024-12-09 14:45:25.976653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:47.859 [2024-12-09 14:45:25.976827] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:47.859 [2024-12-09 14:45:25.976895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.859 [2024-12-09 14:45:25.976945] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.118 "name": "raid_bdev1", 00:12:48.118 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:48.118 "strip_size_kb": 0, 00:12:48.118 "state": "online", 00:12:48.118 "raid_level": "raid1", 00:12:48.118 "superblock": true, 00:12:48.118 "num_base_bdevs": 2, 00:12:48.118 "num_base_bdevs_discovered": 1, 00:12:48.118 "num_base_bdevs_operational": 1, 00:12:48.118 "base_bdevs_list": [ 00:12:48.118 { 00:12:48.118 "name": null, 00:12:48.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.118 "is_configured": false, 00:12:48.118 "data_offset": 0, 00:12:48.118 "data_size": 63488 00:12:48.118 }, 00:12:48.118 { 00:12:48.118 "name": "BaseBdev2", 00:12:48.118 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:48.118 "is_configured": true, 00:12:48.118 "data_offset": 2048, 00:12:48.118 "data_size": 63488 00:12:48.118 } 00:12:48.118 ] 00:12:48.118 }' 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.118 14:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.377 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:48.377 14:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.377 14:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.377 [2024-12-09 14:45:26.471523] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:48.377 [2024-12-09 14:45:26.471673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.377 [2024-12-09 14:45:26.471706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:48.377 [2024-12-09 14:45:26.471720] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.377 [2024-12-09 14:45:26.472281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.377 [2024-12-09 14:45:26.472307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:48.377 [2024-12-09 14:45:26.472423] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:48.377 [2024-12-09 14:45:26.472441] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:48.377 [2024-12-09 14:45:26.472454] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:48.377 [2024-12-09 14:45:26.472486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.377 [2024-12-09 14:45:26.491499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:48.377 spare 00:12:48.377 14:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.377 14:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:48.377 [2024-12-09 14:45:26.493705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.755 "name": "raid_bdev1", 00:12:49.755 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:49.755 "strip_size_kb": 0, 00:12:49.755 "state": "online", 00:12:49.755 "raid_level": "raid1", 00:12:49.755 "superblock": true, 00:12:49.755 "num_base_bdevs": 2, 00:12:49.755 "num_base_bdevs_discovered": 2, 00:12:49.755 "num_base_bdevs_operational": 2, 00:12:49.755 "process": { 00:12:49.755 "type": "rebuild", 00:12:49.755 "target": "spare", 00:12:49.755 "progress": { 00:12:49.755 "blocks": 20480, 00:12:49.755 "percent": 32 00:12:49.755 } 00:12:49.755 }, 00:12:49.755 "base_bdevs_list": [ 00:12:49.755 { 00:12:49.755 "name": "spare", 00:12:49.755 "uuid": "d162c4d8-e682-5b9f-866d-c362e9b8d274", 00:12:49.755 "is_configured": true, 00:12:49.755 "data_offset": 2048, 00:12:49.755 "data_size": 63488 00:12:49.755 }, 00:12:49.755 { 00:12:49.755 "name": "BaseBdev2", 00:12:49.755 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:49.755 "is_configured": true, 00:12:49.755 "data_offset": 2048, 00:12:49.755 "data_size": 63488 00:12:49.755 } 00:12:49.755 ] 00:12:49.755 }' 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.755 
14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.755 [2024-12-09 14:45:27.644851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.755 [2024-12-09 14:45:27.699817] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:49.755 [2024-12-09 14:45:27.699898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.755 [2024-12-09 14:45:27.699920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.755 [2024-12-09 14:45:27.699929] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.755 14:45:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.756 "name": "raid_bdev1", 00:12:49.756 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:49.756 "strip_size_kb": 0, 00:12:49.756 "state": "online", 00:12:49.756 "raid_level": "raid1", 00:12:49.756 "superblock": true, 00:12:49.756 "num_base_bdevs": 2, 00:12:49.756 "num_base_bdevs_discovered": 1, 00:12:49.756 "num_base_bdevs_operational": 1, 00:12:49.756 "base_bdevs_list": [ 00:12:49.756 { 00:12:49.756 "name": null, 00:12:49.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.756 "is_configured": false, 00:12:49.756 "data_offset": 0, 00:12:49.756 "data_size": 63488 00:12:49.756 }, 00:12:49.756 { 00:12:49.756 "name": "BaseBdev2", 00:12:49.756 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:49.756 "is_configured": true, 00:12:49.756 "data_offset": 2048, 00:12:49.756 "data_size": 63488 00:12:49.756 } 00:12:49.756 ] 00:12:49.756 }' 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.756 14:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.382 14:45:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.382 "name": "raid_bdev1", 00:12:50.382 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:50.382 "strip_size_kb": 0, 00:12:50.382 "state": "online", 00:12:50.382 "raid_level": "raid1", 00:12:50.382 "superblock": true, 00:12:50.382 "num_base_bdevs": 2, 00:12:50.382 "num_base_bdevs_discovered": 1, 00:12:50.382 "num_base_bdevs_operational": 1, 00:12:50.382 "base_bdevs_list": [ 00:12:50.382 { 00:12:50.382 "name": null, 00:12:50.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.382 "is_configured": false, 00:12:50.382 "data_offset": 0, 00:12:50.382 "data_size": 63488 00:12:50.382 }, 00:12:50.382 { 00:12:50.382 "name": "BaseBdev2", 00:12:50.382 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:50.382 "is_configured": true, 00:12:50.382 "data_offset": 2048, 00:12:50.382 "data_size": 
63488 00:12:50.382 } 00:12:50.382 ] 00:12:50.382 }' 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.382 [2024-12-09 14:45:28.357888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:50.382 [2024-12-09 14:45:28.357963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.382 [2024-12-09 14:45:28.357998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:50.382 [2024-12-09 14:45:28.358022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.382 [2024-12-09 14:45:28.358553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.382 [2024-12-09 14:45:28.358595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:50.382 [2024-12-09 14:45:28.358696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:50.382 [2024-12-09 14:45:28.358712] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:50.382 [2024-12-09 14:45:28.358724] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:50.382 [2024-12-09 14:45:28.358736] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:50.382 BaseBdev1 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.382 14:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.333 "name": "raid_bdev1", 00:12:51.333 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:51.333 "strip_size_kb": 0, 00:12:51.333 "state": "online", 00:12:51.333 "raid_level": "raid1", 00:12:51.333 "superblock": true, 00:12:51.333 "num_base_bdevs": 2, 00:12:51.333 "num_base_bdevs_discovered": 1, 00:12:51.333 "num_base_bdevs_operational": 1, 00:12:51.333 "base_bdevs_list": [ 00:12:51.333 { 00:12:51.333 "name": null, 00:12:51.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.333 "is_configured": false, 00:12:51.333 "data_offset": 0, 00:12:51.333 "data_size": 63488 00:12:51.333 }, 00:12:51.333 { 00:12:51.333 "name": "BaseBdev2", 00:12:51.333 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:51.333 "is_configured": true, 00:12:51.333 "data_offset": 2048, 00:12:51.333 "data_size": 63488 00:12:51.333 } 00:12:51.333 ] 00:12:51.333 }' 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.333 14:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.901 "name": "raid_bdev1", 00:12:51.901 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:51.901 "strip_size_kb": 0, 00:12:51.901 "state": "online", 00:12:51.901 "raid_level": "raid1", 00:12:51.901 "superblock": true, 00:12:51.901 "num_base_bdevs": 2, 00:12:51.901 "num_base_bdevs_discovered": 1, 00:12:51.901 "num_base_bdevs_operational": 1, 00:12:51.901 "base_bdevs_list": [ 00:12:51.901 { 00:12:51.901 "name": null, 00:12:51.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.901 "is_configured": false, 00:12:51.901 "data_offset": 0, 00:12:51.901 "data_size": 63488 00:12:51.901 }, 00:12:51.901 { 00:12:51.901 "name": "BaseBdev2", 00:12:51.901 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:51.901 "is_configured": true, 00:12:51.901 "data_offset": 2048, 00:12:51.901 "data_size": 63488 00:12:51.901 } 00:12:51.901 ] 00:12:51.901 }' 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.901 14:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.901 14:45:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.901 14:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.902 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.902 [2024-12-09 14:45:30.015227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.902 [2024-12-09 14:45:30.015499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:51.902 [2024-12-09 14:45:30.015604] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:51.902 request: 00:12:51.902 { 00:12:51.902 "base_bdev": "BaseBdev1", 00:12:51.902 "raid_bdev": "raid_bdev1", 00:12:52.161 "method": 
"bdev_raid_add_base_bdev", 00:12:52.161 "req_id": 1 00:12:52.161 } 00:12:52.161 Got JSON-RPC error response 00:12:52.161 response: 00:12:52.161 { 00:12:52.161 "code": -22, 00:12:52.161 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:52.161 } 00:12:52.161 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:52.161 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:52.161 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:52.161 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:52.161 14:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:52.161 14:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:53.096 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.096 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.096 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.096 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.096 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.096 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.096 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.097 14:45:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.097 "name": "raid_bdev1", 00:12:53.097 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:53.097 "strip_size_kb": 0, 00:12:53.097 "state": "online", 00:12:53.097 "raid_level": "raid1", 00:12:53.097 "superblock": true, 00:12:53.097 "num_base_bdevs": 2, 00:12:53.097 "num_base_bdevs_discovered": 1, 00:12:53.097 "num_base_bdevs_operational": 1, 00:12:53.097 "base_bdevs_list": [ 00:12:53.097 { 00:12:53.097 "name": null, 00:12:53.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.097 "is_configured": false, 00:12:53.097 "data_offset": 0, 00:12:53.097 "data_size": 63488 00:12:53.097 }, 00:12:53.097 { 00:12:53.097 "name": "BaseBdev2", 00:12:53.097 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:53.097 "is_configured": true, 00:12:53.097 "data_offset": 2048, 00:12:53.097 "data_size": 63488 00:12:53.097 } 00:12:53.097 ] 00:12:53.097 }' 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.097 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.665 "name": "raid_bdev1", 00:12:53.665 "uuid": "64e1cab5-cce4-4624-8287-8ec70573f992", 00:12:53.665 "strip_size_kb": 0, 00:12:53.665 "state": "online", 00:12:53.665 "raid_level": "raid1", 00:12:53.665 "superblock": true, 00:12:53.665 "num_base_bdevs": 2, 00:12:53.665 "num_base_bdevs_discovered": 1, 00:12:53.665 "num_base_bdevs_operational": 1, 00:12:53.665 "base_bdevs_list": [ 00:12:53.665 { 00:12:53.665 "name": null, 00:12:53.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.665 "is_configured": false, 00:12:53.665 "data_offset": 0, 00:12:53.665 "data_size": 63488 00:12:53.665 }, 00:12:53.665 { 00:12:53.665 "name": "BaseBdev2", 00:12:53.665 "uuid": "ccb567d5-964f-51d2-bedf-95e072b0b8a0", 00:12:53.665 "is_configured": true, 00:12:53.665 "data_offset": 2048, 00:12:53.665 "data_size": 63488 00:12:53.665 } 00:12:53.665 ] 00:12:53.665 }' 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77042 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77042 ']' 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77042 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77042 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77042' 00:12:53.665 killing process with pid 77042 00:12:53.665 14:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77042 00:12:53.665 Received shutdown signal, test time was about 60.000000 seconds 00:12:53.665 00:12:53.665 Latency(us) 00:12:53.665 [2024-12-09T14:45:31.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.665 [2024-12-09T14:45:31.787Z] =================================================================================================================== 00:12:53.665 [2024-12-09T14:45:31.787Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:53.665 [2024-12-09 14:45:31.660632] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.665 14:45:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77042 00:12:53.665 [2024-12-09 14:45:31.660816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.665 [2024-12-09 14:45:31.660875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.665 [2024-12-09 14:45:31.660888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:53.924 [2024-12-09 14:45:32.023402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:55.306 00:12:55.306 real 0m24.368s 00:12:55.306 user 0m29.989s 00:12:55.306 sys 0m3.847s 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.306 ************************************ 00:12:55.306 END TEST raid_rebuild_test_sb 00:12:55.306 ************************************ 00:12:55.306 14:45:33 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:55.306 14:45:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:55.306 14:45:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.306 14:45:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.306 ************************************ 00:12:55.306 START TEST raid_rebuild_test_io 00:12:55.306 ************************************ 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:55.306 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:55.565 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:55.566 
14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77783 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77783 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 77783 ']' 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.566 14:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.566 [2024-12-09 14:45:33.533596] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:12:55.566 [2024-12-09 14:45:33.533845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77783 ] 00:12:55.566 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:55.566 Zero copy mechanism will not be used. 
00:12:55.824 [2024-12-09 14:45:33.712987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.824 [2024-12-09 14:45:33.846768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.085 [2024-12-09 14:45:34.086018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.085 [2024-12-09 14:45:34.086170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.347 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.347 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:56.347 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.347 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:56.347 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.347 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.607 BaseBdev1_malloc 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.607 [2024-12-09 14:45:34.485505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:56.607 [2024-12-09 14:45:34.485597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.607 [2024-12-09 14:45:34.485635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.607 [2024-12-09 
14:45:34.485651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.607 [2024-12-09 14:45:34.488211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.607 [2024-12-09 14:45:34.488263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.607 BaseBdev1 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.607 BaseBdev2_malloc 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.607 [2024-12-09 14:45:34.552254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:56.607 [2024-12-09 14:45:34.552395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.607 [2024-12-09 14:45:34.552451] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:56.607 [2024-12-09 14:45:34.552490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.607 [2024-12-09 14:45:34.555126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:56.607 [2024-12-09 14:45:34.555228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:56.607 BaseBdev2 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.607 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.607 spare_malloc 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.608 spare_delay 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.608 [2024-12-09 14:45:34.641951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:56.608 [2024-12-09 14:45:34.642044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.608 [2024-12-09 14:45:34.642071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:56.608 [2024-12-09 14:45:34.642084] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.608 [2024-12-09 14:45:34.644667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.608 [2024-12-09 14:45:34.644767] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:56.608 spare 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.608 [2024-12-09 14:45:34.653977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.608 [2024-12-09 14:45:34.656103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.608 [2024-12-09 14:45:34.656285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:56.608 [2024-12-09 14:45:34.656306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:56.608 [2024-12-09 14:45:34.656656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:56.608 [2024-12-09 14:45:34.656873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:56.608 [2024-12-09 14:45:34.656888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:56.608 [2024-12-09 14:45:34.657111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.608 "name": "raid_bdev1", 00:12:56.608 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:12:56.608 "strip_size_kb": 0, 00:12:56.608 "state": "online", 00:12:56.608 "raid_level": "raid1", 00:12:56.608 "superblock": false, 00:12:56.608 "num_base_bdevs": 2, 00:12:56.608 
"num_base_bdevs_discovered": 2, 00:12:56.608 "num_base_bdevs_operational": 2, 00:12:56.608 "base_bdevs_list": [ 00:12:56.608 { 00:12:56.608 "name": "BaseBdev1", 00:12:56.608 "uuid": "5658cee3-7a80-5f93-be0c-455538b4d14a", 00:12:56.608 "is_configured": true, 00:12:56.608 "data_offset": 0, 00:12:56.608 "data_size": 65536 00:12:56.608 }, 00:12:56.608 { 00:12:56.608 "name": "BaseBdev2", 00:12:56.608 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:12:56.608 "is_configured": true, 00:12:56.608 "data_offset": 0, 00:12:56.608 "data_size": 65536 00:12:56.608 } 00:12:56.608 ] 00:12:56.608 }' 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.608 14:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:57.174 [2024-12-09 14:45:35.133548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.174 [2024-12-09 14:45:35.229031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.174 "name": "raid_bdev1", 00:12:57.174 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:12:57.174 "strip_size_kb": 0, 00:12:57.174 "state": "online", 00:12:57.174 "raid_level": "raid1", 00:12:57.174 "superblock": false, 00:12:57.174 "num_base_bdevs": 2, 00:12:57.174 "num_base_bdevs_discovered": 1, 00:12:57.174 "num_base_bdevs_operational": 1, 00:12:57.174 "base_bdevs_list": [ 00:12:57.174 { 00:12:57.174 "name": null, 00:12:57.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.174 "is_configured": false, 00:12:57.174 "data_offset": 0, 00:12:57.174 "data_size": 65536 00:12:57.174 }, 00:12:57.174 { 00:12:57.174 "name": "BaseBdev2", 00:12:57.174 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:12:57.174 "is_configured": true, 00:12:57.174 "data_offset": 0, 00:12:57.174 "data_size": 65536 00:12:57.174 } 00:12:57.174 ] 00:12:57.174 }' 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.174 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.433 [2024-12-09 14:45:35.342355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:57.433 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:12:57.433 Zero copy mechanism will not be used. 00:12:57.433 Running I/O for 60 seconds... 00:12:57.692 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.692 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.692 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.692 [2024-12-09 14:45:35.658361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.693 14:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.693 14:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:57.693 [2024-12-09 14:45:35.719809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:57.693 [2024-12-09 14:45:35.721892] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.952 [2024-12-09 14:45:35.849391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.952 [2024-12-09 14:45:35.850124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.952 [2024-12-09 14:45:36.060245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.952 [2024-12-09 14:45:36.060727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.521 137.00 IOPS, 411.00 MiB/s [2024-12-09T14:45:36.643Z] [2024-12-09 14:45:36.380373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:12:58.780 [2024-12-09 14:45:36.717932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.780 "name": "raid_bdev1", 00:12:58.780 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:12:58.780 "strip_size_kb": 0, 00:12:58.780 "state": "online", 00:12:58.780 "raid_level": "raid1", 00:12:58.780 "superblock": false, 00:12:58.780 "num_base_bdevs": 2, 00:12:58.780 "num_base_bdevs_discovered": 2, 00:12:58.780 "num_base_bdevs_operational": 2, 00:12:58.780 "process": { 00:12:58.780 "type": "rebuild", 00:12:58.780 "target": "spare", 00:12:58.780 "progress": { 00:12:58.780 "blocks": 14336, 00:12:58.780 "percent": 21 00:12:58.780 } 00:12:58.780 }, 00:12:58.780 "base_bdevs_list": [ 00:12:58.780 { 00:12:58.780 "name": "spare", 00:12:58.780 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:12:58.780
"is_configured": true, 00:12:58.780 "data_offset": 0, 00:12:58.780 "data_size": 65536 00:12:58.780 }, 00:12:58.780 { 00:12:58.780 "name": "BaseBdev2", 00:12:58.780 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:12:58.780 "is_configured": true, 00:12:58.780 "data_offset": 0, 00:12:58.780 "data_size": 65536 00:12:58.780 } 00:12:58.780 ] 00:12:58.780 }' 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.780 [2024-12-09 14:45:36.819625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:58.780 [2024-12-09 14:45:36.820075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.780 14:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.780 [2024-12-09 14:45:36.830677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.039 [2024-12-09 14:45:36.951400] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:59.039 [2024-12-09 14:45:36.954108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.039 [2024-12-09 14:45:36.954148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.039 [2024-12-09 14:45:36.954162] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.039 [2024-12-09 14:45:36.997556] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.039 "name": "raid_bdev1", 00:12:59.039 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:12:59.039 "strip_size_kb": 0, 00:12:59.039 "state": "online", 00:12:59.039 "raid_level": "raid1", 00:12:59.039 "superblock": false, 00:12:59.039 "num_base_bdevs": 2, 00:12:59.039 "num_base_bdevs_discovered": 1, 00:12:59.039 "num_base_bdevs_operational": 1, 00:12:59.039 "base_bdevs_list": [ 00:12:59.039 { 00:12:59.039 "name": null, 00:12:59.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.039 "is_configured": false, 00:12:59.039 "data_offset": 0, 00:12:59.039 "data_size": 65536 00:12:59.039 }, 00:12:59.039 { 00:12:59.039 "name": "BaseBdev2", 00:12:59.039 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:12:59.039 "is_configured": true, 00:12:59.039 "data_offset": 0, 00:12:59.039 "data_size": 65536 00:12:59.039 } 00:12:59.039 ] 00:12:59.039 }' 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.039 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.558 135.00 IOPS, 405.00 MiB/s [2024-12-09T14:45:37.680Z] 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.558 "name": "raid_bdev1", 00:12:59.558 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:12:59.558 "strip_size_kb": 0, 00:12:59.558 "state": "online", 00:12:59.558 "raid_level": "raid1", 00:12:59.558 "superblock": false, 00:12:59.558 "num_base_bdevs": 2, 00:12:59.558 "num_base_bdevs_discovered": 1, 00:12:59.558 "num_base_bdevs_operational": 1, 00:12:59.558 "base_bdevs_list": [ 00:12:59.558 { 00:12:59.558 "name": null, 00:12:59.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.558 "is_configured": false, 00:12:59.558 "data_offset": 0, 00:12:59.558 "data_size": 65536 00:12:59.558 }, 00:12:59.558 { 00:12:59.558 "name": "BaseBdev2", 00:12:59.558 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:12:59.558 "is_configured": true, 00:12:59.558 "data_offset": 0, 00:12:59.558 "data_size": 65536 00:12:59.558 } 00:12:59.558 ] 00:12:59.558 }' 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.558 14:45:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.558 [2024-12-09 14:45:37.585356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.558 14:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:59.558 [2024-12-09 14:45:37.661162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:59.558 [2024-12-09 14:45:37.663204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.819 [2024-12-09 14:45:37.776964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:59.819 [2024-12-09 14:45:37.777670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.078 [2024-12-09 14:45:37.993913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.078 [2024-12-09 14:45:37.994282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.337 [2024-12-09 14:45:38.320975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:00.337 128.33 IOPS, 385.00 MiB/s [2024-12-09T14:45:38.459Z] [2024-12-09 14:45:38.435649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.337 [2024-12-09 14:45:38.435892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.596 14:45:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.596 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.596 "name": "raid_bdev1", 00:13:00.596 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:13:00.596 "strip_size_kb": 0, 00:13:00.596 "state": "online", 00:13:00.597 "raid_level": "raid1", 00:13:00.597 "superblock": false, 00:13:00.597 "num_base_bdevs": 2, 00:13:00.597 "num_base_bdevs_discovered": 2, 00:13:00.597 "num_base_bdevs_operational": 2, 00:13:00.597 "process": { 00:13:00.597 "type": "rebuild", 00:13:00.597 "target": "spare", 00:13:00.597 "progress": { 00:13:00.597 "blocks": 12288, 00:13:00.597 "percent": 18 00:13:00.597 } 00:13:00.597 }, 00:13:00.597 "base_bdevs_list": [ 00:13:00.597 { 00:13:00.597 "name": "spare", 00:13:00.597 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:13:00.597 "is_configured": true, 00:13:00.597 "data_offset": 0, 00:13:00.597 "data_size": 65536 00:13:00.597 }, 00:13:00.597 { 00:13:00.597 "name": "BaseBdev2", 00:13:00.597 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:13:00.597 
"is_configured": true, 00:13:00.597 "data_offset": 0, 00:13:00.597 "data_size": 65536 00:13:00.597 } 00:13:00.597 ] 00:13:00.597 }' 00:13:00.597 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.597 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.597 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.857 "name": "raid_bdev1", 00:13:00.857 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:13:00.857 "strip_size_kb": 0, 00:13:00.857 "state": "online", 00:13:00.857 "raid_level": "raid1", 00:13:00.857 "superblock": false, 00:13:00.857 "num_base_bdevs": 2, 00:13:00.857 "num_base_bdevs_discovered": 2, 00:13:00.857 "num_base_bdevs_operational": 2, 00:13:00.857 "process": { 00:13:00.857 "type": "rebuild", 00:13:00.857 "target": "spare", 00:13:00.857 "progress": { 00:13:00.857 "blocks": 14336, 00:13:00.857 "percent": 21 00:13:00.857 } 00:13:00.857 }, 00:13:00.857 "base_bdevs_list": [ 00:13:00.857 { 00:13:00.857 "name": "spare", 00:13:00.857 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:13:00.857 "is_configured": true, 00:13:00.857 "data_offset": 0, 00:13:00.857 "data_size": 65536 00:13:00.857 }, 00:13:00.857 { 00:13:00.857 "name": "BaseBdev2", 00:13:00.857 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:13:00.857 "is_configured": true, 00:13:00.857 "data_offset": 0, 00:13:00.857 "data_size": 65536 00:13:00.857 } 00:13:00.857 ] 00:13:00.857 }' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.857 14:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.857 14:45:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.117 [2024-12-09 14:45:39.013148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:01.117 [2024-12-09 14:45:39.128267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:01.948 113.75 IOPS, 341.25 MiB/s [2024-12-09T14:45:40.070Z] 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.948 "name": "raid_bdev1", 00:13:01.948 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:13:01.948 "strip_size_kb": 0, 00:13:01.948 "state": "online", 00:13:01.948 "raid_level": "raid1", 00:13:01.948 "superblock": 
false, 00:13:01.948 "num_base_bdevs": 2, 00:13:01.948 "num_base_bdevs_discovered": 2, 00:13:01.948 "num_base_bdevs_operational": 2, 00:13:01.948 "process": { 00:13:01.948 "type": "rebuild", 00:13:01.948 "target": "spare", 00:13:01.948 "progress": { 00:13:01.948 "blocks": 34816, 00:13:01.948 "percent": 53 00:13:01.948 } 00:13:01.948 }, 00:13:01.948 "base_bdevs_list": [ 00:13:01.948 { 00:13:01.948 "name": "spare", 00:13:01.948 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:13:01.948 "is_configured": true, 00:13:01.948 "data_offset": 0, 00:13:01.948 "data_size": 65536 00:13:01.948 }, 00:13:01.948 { 00:13:01.948 "name": "BaseBdev2", 00:13:01.948 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:13:01.948 "is_configured": true, 00:13:01.948 "data_offset": 0, 00:13:01.948 "data_size": 65536 00:13:01.948 } 00:13:01.948 ] 00:13:01.948 }' 00:13:01.948 14:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.948 14:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.948 14:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.948 [2024-12-09 14:45:40.049561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:02.211 14:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.211 14:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.211 [2024-12-09 14:45:40.157695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:02.470 101.20 IOPS, 303.60 MiB/s [2024-12-09T14:45:40.592Z] [2024-12-09 14:45:40.521536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.036 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.036 "name": "raid_bdev1", 00:13:03.036 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:13:03.036 "strip_size_kb": 0, 00:13:03.036 "state": "online", 00:13:03.036 "raid_level": "raid1", 00:13:03.037 "superblock": false, 00:13:03.037 "num_base_bdevs": 2, 00:13:03.037 "num_base_bdevs_discovered": 2, 00:13:03.037 "num_base_bdevs_operational": 2, 00:13:03.037 "process": { 00:13:03.037 "type": "rebuild", 00:13:03.037 "target": "spare", 00:13:03.037 "progress": { 00:13:03.037 "blocks": 57344, 00:13:03.037 "percent": 87 00:13:03.037 } 00:13:03.037 }, 00:13:03.037 "base_bdevs_list": [ 00:13:03.037 { 00:13:03.037 "name": "spare", 00:13:03.037 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:13:03.037 "is_configured": 
true, 00:13:03.037 "data_offset": 0, 00:13:03.037 "data_size": 65536 00:13:03.037 }, 00:13:03.037 { 00:13:03.037 "name": "BaseBdev2", 00:13:03.037 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:13:03.037 "is_configured": true, 00:13:03.037 "data_offset": 0, 00:13:03.037 "data_size": 65536 00:13:03.037 } 00:13:03.037 ] 00:13:03.037 }' 00:13:03.037 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.295 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.295 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.295 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.295 14:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.555 92.50 IOPS, 277.50 MiB/s [2024-12-09T14:45:41.677Z] [2024-12-09 14:45:41.524280] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:03.555 [2024-12-09 14:45:41.630059] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:03.555 [2024-12-09 14:45:41.633445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.492 "name": "raid_bdev1", 00:13:04.492 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:13:04.492 "strip_size_kb": 0, 00:13:04.492 "state": "online", 00:13:04.492 "raid_level": "raid1", 00:13:04.492 "superblock": false, 00:13:04.492 "num_base_bdevs": 2, 00:13:04.492 "num_base_bdevs_discovered": 2, 00:13:04.492 "num_base_bdevs_operational": 2, 00:13:04.492 "base_bdevs_list": [ 00:13:04.492 { 00:13:04.492 "name": "spare", 00:13:04.492 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 0, 00:13:04.492 "data_size": 65536 00:13:04.492 }, 00:13:04.492 { 00:13:04.492 "name": "BaseBdev2", 00:13:04.492 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 0, 00:13:04.492 "data_size": 65536 00:13:04.492 } 00:13:04.492 ] 00:13:04.492 }' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.492 83.29 IOPS, 249.86 MiB/s [2024-12-09T14:45:42.614Z] 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ 
none == \s\p\a\r\e ]] 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.492 "name": "raid_bdev1", 00:13:04.492 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:13:04.492 "strip_size_kb": 0, 00:13:04.492 "state": "online", 00:13:04.492 "raid_level": "raid1", 00:13:04.492 "superblock": false, 00:13:04.492 "num_base_bdevs": 2, 00:13:04.492 "num_base_bdevs_discovered": 2, 00:13:04.492 "num_base_bdevs_operational": 2, 00:13:04.492 "base_bdevs_list": [ 00:13:04.492 { 00:13:04.492 "name": "spare", 00:13:04.492 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 0, 00:13:04.492 "data_size": 65536 00:13:04.492 }, 00:13:04.492 { 00:13:04.492 "name": "BaseBdev2", 00:13:04.492 "uuid": 
"a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 0, 00:13:04.492 "data_size": 65536 00:13:04.492 } 00:13:04.492 ] 00:13:04.492 }' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.492 14:45:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.492 "name": "raid_bdev1", 00:13:04.492 "uuid": "0d898c3a-e932-4c15-9f4e-865029279d6e", 00:13:04.492 "strip_size_kb": 0, 00:13:04.492 "state": "online", 00:13:04.492 "raid_level": "raid1", 00:13:04.492 "superblock": false, 00:13:04.492 "num_base_bdevs": 2, 00:13:04.492 "num_base_bdevs_discovered": 2, 00:13:04.492 "num_base_bdevs_operational": 2, 00:13:04.492 "base_bdevs_list": [ 00:13:04.492 { 00:13:04.492 "name": "spare", 00:13:04.492 "uuid": "a9755261-ddfd-5189-98c5-cff9103349ae", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 0, 00:13:04.492 "data_size": 65536 00:13:04.492 }, 00:13:04.492 { 00:13:04.492 "name": "BaseBdev2", 00:13:04.492 "uuid": "a6f1d40b-4b5c-5dbb-abfb-e2fde19a9822", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 0, 00:13:04.492 "data_size": 65536 00:13:04.492 } 00:13:04.492 ] 00:13:04.492 }' 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.492 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.059 [2024-12-09 14:45:42.928321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.059 [2024-12-09 14:45:42.928358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:13:05.059 00:13:05.059 Latency(us) 00:13:05.059 [2024-12-09T14:45:43.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.059 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:05.059 raid_bdev1 : 7.62 79.42 238.25 0.00 0.00 18305.67 327.32 109894.43 00:13:05.059 [2024-12-09T14:45:43.181Z] =================================================================================================================== 00:13:05.059 [2024-12-09T14:45:43.181Z] Total : 79.42 238.25 0.00 0.00 18305.67 327.32 109894.43 00:13:05.059 [2024-12-09 14:45:42.974106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.059 [2024-12-09 14:45:42.974184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.059 [2024-12-09 14:45:42.974273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.059 [2024-12-09 14:45:42.974285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.059 { 00:13:05.059 "results": [ 00:13:05.059 { 00:13:05.059 "job": "raid_bdev1", 00:13:05.059 "core_mask": "0x1", 00:13:05.059 "workload": "randrw", 00:13:05.059 "percentage": 50, 00:13:05.059 "status": "finished", 00:13:05.059 "queue_depth": 2, 00:13:05.059 "io_size": 3145728, 00:13:05.059 "runtime": 7.618006, 00:13:05.059 "iops": 79.41710731128329, 00:13:05.059 "mibps": 238.25132193384985, 00:13:05.059 "io_failed": 0, 00:13:05.059 "io_timeout": 0, 00:13:05.059 "avg_latency_us": 18305.671312569924, 00:13:05.059 "min_latency_us": 327.32227074235806, 00:13:05.059 "max_latency_us": 109894.42794759825 00:13:05.059 } 00:13:05.059 ], 00:13:05.059 "core_count": 1 00:13:05.059 } 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.059 14:45:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.059 14:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.059 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:05.319 /dev/nbd0 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.319 1+0 records in 00:13:05.319 1+0 records out 00:13:05.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325395 s, 12.6 MB/s 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.319 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:05.578 /dev/nbd1 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 
00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.578 1+0 records in 00:13:05.578 1+0 records out 00:13:05.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565835 s, 7.2 MB/s 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.578 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:05.836 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:05.836 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:05.836 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:05.836 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:05.836 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:05.836 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.836 14:45:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.094 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77783 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 77783 ']' 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 77783 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77783 00:13:06.353 killing process with pid 77783 00:13:06.353 Received shutdown signal, test time was about 9.045818 seconds 00:13:06.353 00:13:06.353 Latency(us) 00:13:06.353 [2024-12-09T14:45:44.475Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.353 [2024-12-09T14:45:44.475Z] =================================================================================================================== 00:13:06.353 [2024-12-09T14:45:44.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.353 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77783' 00:13:06.354 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 77783 00:13:06.354 [2024-12-09 14:45:44.373241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.354 14:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 77783 00:13:06.613 [2024-12-09 14:45:44.646294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.992 14:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:07.992 00:13:07.992 real 0m12.621s 00:13:07.992 user 0m15.900s 00:13:07.992 sys 0m1.483s 00:13:07.992 14:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.992 14:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.992 ************************************ 00:13:07.992 END TEST raid_rebuild_test_io 00:13:07.992 ************************************ 00:13:07.992 14:45:46 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:07.992 14:45:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:07.992 14:45:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.992 14:45:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.992 
************************************ 00:13:07.992 START TEST raid_rebuild_test_sb_io 00:13:07.992 ************************************ 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.251 
14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78159 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78159 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78159 ']' 00:13:08.251 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.252 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.252 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:08.252 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.252 14:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.252 [2024-12-09 14:45:46.217727] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:13:08.252 [2024-12-09 14:45:46.217961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.252 Zero copy mechanism will not be used. 00:13:08.252 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78159 ] 00:13:08.511 [2024-12-09 14:45:46.396632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.511 [2024-12-09 14:45:46.530475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.770 [2024-12-09 14:45:46.764817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.770 [2024-12-09 14:45:46.764992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.040 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.040 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:09.040 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.040 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.040 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.040 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 BaseBdev1_malloc 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 [2024-12-09 14:45:47.175566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.315 [2024-12-09 14:45:47.175656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.315 [2024-12-09 14:45:47.175684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.315 [2024-12-09 14:45:47.175698] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.315 [2024-12-09 14:45:47.178207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.315 [2024-12-09 14:45:47.178256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.315 BaseBdev1 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 BaseBdev2_malloc 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 [2024-12-09 14:45:47.237547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.315 [2024-12-09 14:45:47.237632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.315 [2024-12-09 14:45:47.237659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.315 [2024-12-09 14:45:47.237671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.315 [2024-12-09 14:45:47.240047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.315 [2024-12-09 14:45:47.240186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.315 BaseBdev2 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 spare_malloc 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 spare_delay 
00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 [2024-12-09 14:45:47.317840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.315 [2024-12-09 14:45:47.317911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.315 [2024-12-09 14:45:47.317937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:09.315 [2024-12-09 14:45:47.317951] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.315 [2024-12-09 14:45:47.320459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.315 [2024-12-09 14:45:47.320506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.315 spare 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 [2024-12-09 14:45:47.329890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.315 [2024-12-09 14:45:47.331951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.315 [2024-12-09 14:45:47.332164] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.315 [2024-12-09 14:45:47.332183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.315 [2024-12-09 14:45:47.332488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:09.315 [2024-12-09 14:45:47.332692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.315 [2024-12-09 14:45:47.332703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:09.315 [2024-12-09 14:45:47.332902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.315 14:45:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.315 "name": "raid_bdev1", 00:13:09.315 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:09.315 "strip_size_kb": 0, 00:13:09.315 "state": "online", 00:13:09.315 "raid_level": "raid1", 00:13:09.315 "superblock": true, 00:13:09.315 "num_base_bdevs": 2, 00:13:09.315 "num_base_bdevs_discovered": 2, 00:13:09.315 "num_base_bdevs_operational": 2, 00:13:09.315 "base_bdevs_list": [ 00:13:09.315 { 00:13:09.315 "name": "BaseBdev1", 00:13:09.315 "uuid": "68fe5da6-c7e3-5c0e-9eb7-c8d9f8d82376", 00:13:09.315 "is_configured": true, 00:13:09.315 "data_offset": 2048, 00:13:09.315 "data_size": 63488 00:13:09.315 }, 00:13:09.315 { 00:13:09.315 "name": "BaseBdev2", 00:13:09.315 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:09.315 "is_configured": true, 00:13:09.315 "data_offset": 2048, 00:13:09.315 "data_size": 63488 00:13:09.315 } 00:13:09.315 ] 00:13:09.315 }' 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.315 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.882 14:45:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.882 [2024-12-09 14:45:47.809477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.882 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.882 [2024-12-09 14:45:47.896977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.883 "name": "raid_bdev1", 00:13:09.883 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:09.883 "strip_size_kb": 0, 00:13:09.883 "state": "online", 00:13:09.883 
"raid_level": "raid1", 00:13:09.883 "superblock": true, 00:13:09.883 "num_base_bdevs": 2, 00:13:09.883 "num_base_bdevs_discovered": 1, 00:13:09.883 "num_base_bdevs_operational": 1, 00:13:09.883 "base_bdevs_list": [ 00:13:09.883 { 00:13:09.883 "name": null, 00:13:09.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.883 "is_configured": false, 00:13:09.883 "data_offset": 0, 00:13:09.883 "data_size": 63488 00:13:09.883 }, 00:13:09.883 { 00:13:09.883 "name": "BaseBdev2", 00:13:09.883 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:09.883 "is_configured": true, 00:13:09.883 "data_offset": 2048, 00:13:09.883 "data_size": 63488 00:13:09.883 } 00:13:09.883 ] 00:13:09.883 }' 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.883 14:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.883 [2024-12-09 14:45:48.002590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:10.142 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:10.142 Zero copy mechanism will not be used. 00:13:10.142 Running I/O for 60 seconds... 
00:13:10.402 14:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.402 14:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.402 14:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.402 [2024-12-09 14:45:48.342414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.402 14:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.402 14:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:10.402 [2024-12-09 14:45:48.418729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.402 [2024-12-09 14:45:48.420981] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.661 [2024-12-09 14:45:48.531732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:10.662 [2024-12-09 14:45:48.532364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:10.662 [2024-12-09 14:45:48.667107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.921 [2024-12-09 14:45:48.902676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.921 [2024-12-09 14:45:48.903487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:11.180 173.00 IOPS, 519.00 MiB/s [2024-12-09T14:45:49.302Z] [2024-12-09 14:45:49.112344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:11.180 [2024-12-09 14:45:49.112758] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.439 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.439 "name": "raid_bdev1", 00:13:11.439 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:11.439 "strip_size_kb": 0, 00:13:11.439 "state": "online", 00:13:11.439 "raid_level": "raid1", 00:13:11.439 "superblock": true, 00:13:11.439 "num_base_bdevs": 2, 00:13:11.439 "num_base_bdevs_discovered": 2, 00:13:11.439 "num_base_bdevs_operational": 2, 00:13:11.439 "process": { 00:13:11.439 "type": "rebuild", 00:13:11.439 "target": "spare", 00:13:11.439 "progress": { 00:13:11.439 "blocks": 12288, 00:13:11.439 "percent": 19 00:13:11.439 } 00:13:11.439 }, 00:13:11.439 "base_bdevs_list": [ 00:13:11.439 { 00:13:11.439 "name": "spare", 
00:13:11.439 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:11.439 "is_configured": true, 00:13:11.439 "data_offset": 2048, 00:13:11.439 "data_size": 63488 00:13:11.439 }, 00:13:11.439 { 00:13:11.439 "name": "BaseBdev2", 00:13:11.439 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:11.439 "is_configured": true, 00:13:11.439 "data_offset": 2048, 00:13:11.440 "data_size": 63488 00:13:11.440 } 00:13:11.440 ] 00:13:11.440 }' 00:13:11.440 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.440 [2024-12-09 14:45:49.475303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:11.440 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.440 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.440 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.440 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.440 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.440 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.440 [2024-12-09 14:45:49.546722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.700 [2024-12-09 14:45:49.594609] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.700 [2024-12-09 14:45:49.603930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.700 [2024-12-09 14:45:49.604044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.700 [2024-12-09 14:45:49.604082] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:13:11.700 [2024-12-09 14:45:49.655933] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.700 14:45:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.700 "name": "raid_bdev1", 00:13:11.700 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:11.700 "strip_size_kb": 0, 00:13:11.700 "state": "online", 00:13:11.700 "raid_level": "raid1", 00:13:11.700 "superblock": true, 00:13:11.700 "num_base_bdevs": 2, 00:13:11.700 "num_base_bdevs_discovered": 1, 00:13:11.700 "num_base_bdevs_operational": 1, 00:13:11.700 "base_bdevs_list": [ 00:13:11.700 { 00:13:11.700 "name": null, 00:13:11.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.700 "is_configured": false, 00:13:11.700 "data_offset": 0, 00:13:11.700 "data_size": 63488 00:13:11.700 }, 00:13:11.700 { 00:13:11.700 "name": "BaseBdev2", 00:13:11.700 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:11.700 "is_configured": true, 00:13:11.700 "data_offset": 2048, 00:13:11.700 "data_size": 63488 00:13:11.700 } 00:13:11.700 ] 00:13:11.700 }' 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.700 14:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.217 157.50 IOPS, 472.50 MiB/s [2024-12-09T14:45:50.339Z] 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.217 "name": "raid_bdev1", 00:13:12.217 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:12.217 "strip_size_kb": 0, 00:13:12.217 "state": "online", 00:13:12.217 "raid_level": "raid1", 00:13:12.217 "superblock": true, 00:13:12.217 "num_base_bdevs": 2, 00:13:12.217 "num_base_bdevs_discovered": 1, 00:13:12.217 "num_base_bdevs_operational": 1, 00:13:12.217 "base_bdevs_list": [ 00:13:12.217 { 00:13:12.217 "name": null, 00:13:12.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.217 "is_configured": false, 00:13:12.217 "data_offset": 0, 00:13:12.217 "data_size": 63488 00:13:12.217 }, 00:13:12.217 { 00:13:12.217 "name": "BaseBdev2", 00:13:12.217 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:12.217 "is_configured": true, 00:13:12.217 "data_offset": 2048, 00:13:12.217 "data_size": 63488 00:13:12.217 } 00:13:12.217 ] 00:13:12.217 }' 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:12.217 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.217 [2024-12-09 14:45:50.279490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.476 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.476 14:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:12.476 [2024-12-09 14:45:50.359194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:12.476 [2024-12-09 14:45:50.361315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.476 [2024-12-09 14:45:50.470553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.476 [2024-12-09 14:45:50.471125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.735 [2024-12-09 14:45:50.686946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:12.735 [2024-12-09 14:45:50.687334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:12.994 [2024-12-09 14:45:50.918548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:13.252 155.67 IOPS, 467.00 MiB/s [2024-12-09T14:45:51.375Z] [2024-12-09 14:45:51.134706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:13.253 [2024-12-09 14:45:51.135166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.253 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.512 "name": "raid_bdev1", 00:13:13.512 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:13.512 "strip_size_kb": 0, 00:13:13.512 "state": "online", 00:13:13.512 "raid_level": "raid1", 00:13:13.512 "superblock": true, 00:13:13.512 "num_base_bdevs": 2, 00:13:13.512 "num_base_bdevs_discovered": 2, 00:13:13.512 "num_base_bdevs_operational": 2, 00:13:13.512 "process": { 00:13:13.512 "type": "rebuild", 00:13:13.512 "target": "spare", 00:13:13.512 "progress": { 00:13:13.512 "blocks": 12288, 00:13:13.512 "percent": 19 00:13:13.512 } 00:13:13.512 }, 00:13:13.512 "base_bdevs_list": [ 00:13:13.512 { 00:13:13.512 "name": "spare", 00:13:13.512 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:13.512 "is_configured": true, 00:13:13.512 "data_offset": 2048, 00:13:13.512 "data_size": 63488 00:13:13.512 }, 00:13:13.512 { 00:13:13.512 "name": "BaseBdev2", 
00:13:13.512 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:13.512 "is_configured": true, 00:13:13.512 "data_offset": 2048, 00:13:13.512 "data_size": 63488 00:13:13.512 } 00:13:13.512 ] 00:13:13.512 }' 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:13.512 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=423 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:13.512 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.513 [2024-12-09 14:45:51.504700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.513 "name": "raid_bdev1", 00:13:13.513 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:13.513 "strip_size_kb": 0, 00:13:13.513 "state": "online", 00:13:13.513 "raid_level": "raid1", 00:13:13.513 "superblock": true, 00:13:13.513 "num_base_bdevs": 2, 00:13:13.513 "num_base_bdevs_discovered": 2, 00:13:13.513 "num_base_bdevs_operational": 2, 00:13:13.513 "process": { 00:13:13.513 "type": "rebuild", 00:13:13.513 "target": "spare", 00:13:13.513 "progress": { 00:13:13.513 "blocks": 14336, 00:13:13.513 "percent": 22 00:13:13.513 } 00:13:13.513 }, 00:13:13.513 "base_bdevs_list": [ 00:13:13.513 { 00:13:13.513 "name": "spare", 00:13:13.513 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:13.513 "is_configured": true, 00:13:13.513 "data_offset": 2048, 00:13:13.513 "data_size": 63488 00:13:13.513 }, 00:13:13.513 { 00:13:13.513 "name": "BaseBdev2", 00:13:13.513 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:13.513 "is_configured": true, 00:13:13.513 "data_offset": 2048, 00:13:13.513 "data_size": 63488 00:13:13.513 } 
00:13:13.513 ] 00:13:13.513 }' 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.513 14:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.772 [2024-12-09 14:45:51.831544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:13.772 [2024-12-09 14:45:51.832297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:14.032 133.75 IOPS, 401.25 MiB/s [2024-12-09T14:45:52.154Z] [2024-12-09 14:45:52.055755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:14.291 [2024-12-09 14:45:52.385647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.551 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.551 "name": "raid_bdev1", 00:13:14.551 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:14.551 "strip_size_kb": 0, 00:13:14.551 "state": "online", 00:13:14.551 "raid_level": "raid1", 00:13:14.551 "superblock": true, 00:13:14.551 "num_base_bdevs": 2, 00:13:14.551 "num_base_bdevs_discovered": 2, 00:13:14.551 "num_base_bdevs_operational": 2, 00:13:14.551 "process": { 00:13:14.551 "type": "rebuild", 00:13:14.551 "target": "spare", 00:13:14.551 "progress": { 00:13:14.551 "blocks": 32768, 00:13:14.551 "percent": 51 00:13:14.551 } 00:13:14.551 }, 00:13:14.551 "base_bdevs_list": [ 00:13:14.551 { 00:13:14.551 "name": "spare", 00:13:14.551 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:14.551 "is_configured": true, 00:13:14.551 "data_offset": 2048, 00:13:14.551 "data_size": 63488 00:13:14.551 }, 00:13:14.551 { 00:13:14.551 "name": "BaseBdev2", 00:13:14.551 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:14.551 "is_configured": true, 00:13:14.551 "data_offset": 2048, 00:13:14.551 "data_size": 63488 00:13:14.551 } 00:13:14.551 ] 00:13:14.551 }' 00:13:14.811 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.811 [2024-12-09 14:45:52.702367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:13:14.811 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.811 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.811 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.811 14:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.331 117.40 IOPS, 352.20 MiB/s [2024-12-09T14:45:53.453Z] [2024-12-09 14:45:53.351859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.906 14:45:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.906 "name": "raid_bdev1", 00:13:15.906 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:15.906 "strip_size_kb": 0, 00:13:15.906 "state": "online", 00:13:15.906 "raid_level": "raid1", 00:13:15.906 "superblock": true, 00:13:15.906 "num_base_bdevs": 2, 00:13:15.906 "num_base_bdevs_discovered": 2, 00:13:15.906 "num_base_bdevs_operational": 2, 00:13:15.906 "process": { 00:13:15.906 "type": "rebuild", 00:13:15.906 "target": "spare", 00:13:15.906 "progress": { 00:13:15.906 "blocks": 51200, 00:13:15.906 "percent": 80 00:13:15.906 } 00:13:15.906 }, 00:13:15.906 "base_bdevs_list": [ 00:13:15.906 { 00:13:15.906 "name": "spare", 00:13:15.906 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:15.906 "is_configured": true, 00:13:15.906 "data_offset": 2048, 00:13:15.906 "data_size": 63488 00:13:15.906 }, 00:13:15.906 { 00:13:15.906 "name": "BaseBdev2", 00:13:15.906 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:15.906 "is_configured": true, 00:13:15.906 "data_offset": 2048, 00:13:15.906 "data_size": 63488 00:13:15.906 } 00:13:15.906 ] 00:13:15.906 }' 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.906 14:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:16.174 106.83 IOPS, 320.50 MiB/s [2024-12-09T14:45:54.296Z] [2024-12-09 14:45:54.085332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:16.174 [2024-12-09 14:45:54.200515] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:16.434 [2024-12-09 14:45:54.416861] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:16.434 [2024-12-09 14:45:54.516694] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:16.434 [2024-12-09 14:45:54.519445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.002 "name": "raid_bdev1", 00:13:17.002 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:17.002 "strip_size_kb": 0, 00:13:17.002 "state": 
"online", 00:13:17.002 "raid_level": "raid1", 00:13:17.002 "superblock": true, 00:13:17.002 "num_base_bdevs": 2, 00:13:17.002 "num_base_bdevs_discovered": 2, 00:13:17.002 "num_base_bdevs_operational": 2, 00:13:17.002 "base_bdevs_list": [ 00:13:17.002 { 00:13:17.002 "name": "spare", 00:13:17.002 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:17.002 "is_configured": true, 00:13:17.002 "data_offset": 2048, 00:13:17.002 "data_size": 63488 00:13:17.002 }, 00:13:17.002 { 00:13:17.002 "name": "BaseBdev2", 00:13:17.002 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:17.002 "is_configured": true, 00:13:17.002 "data_offset": 2048, 00:13:17.002 "data_size": 63488 00:13:17.002 } 00:13:17.002 ] 00:13:17.002 }' 00:13:17.002 14:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.002 97.29 IOPS, 291.86 MiB/s [2024-12-09T14:45:55.124Z] 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:17.002 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.002 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:17.002 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:17.002 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.002 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.002 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.002 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.003 "name": "raid_bdev1", 00:13:17.003 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:17.003 "strip_size_kb": 0, 00:13:17.003 "state": "online", 00:13:17.003 "raid_level": "raid1", 00:13:17.003 "superblock": true, 00:13:17.003 "num_base_bdevs": 2, 00:13:17.003 "num_base_bdevs_discovered": 2, 00:13:17.003 "num_base_bdevs_operational": 2, 00:13:17.003 "base_bdevs_list": [ 00:13:17.003 { 00:13:17.003 "name": "spare", 00:13:17.003 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:17.003 "is_configured": true, 00:13:17.003 "data_offset": 2048, 00:13:17.003 "data_size": 63488 00:13:17.003 }, 00:13:17.003 { 00:13:17.003 "name": "BaseBdev2", 00:13:17.003 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:17.003 "is_configured": true, 00:13:17.003 "data_offset": 2048, 00:13:17.003 "data_size": 63488 00:13:17.003 } 00:13:17.003 ] 00:13:17.003 }' 00:13:17.003 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.262 "name": "raid_bdev1", 00:13:17.262 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:17.262 "strip_size_kb": 0, 00:13:17.262 "state": "online", 00:13:17.262 "raid_level": "raid1", 00:13:17.262 "superblock": true, 00:13:17.262 "num_base_bdevs": 2, 00:13:17.262 
"num_base_bdevs_discovered": 2, 00:13:17.262 "num_base_bdevs_operational": 2, 00:13:17.262 "base_bdevs_list": [ 00:13:17.262 { 00:13:17.262 "name": "spare", 00:13:17.262 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:17.262 "is_configured": true, 00:13:17.262 "data_offset": 2048, 00:13:17.262 "data_size": 63488 00:13:17.262 }, 00:13:17.262 { 00:13:17.262 "name": "BaseBdev2", 00:13:17.262 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:17.262 "is_configured": true, 00:13:17.262 "data_offset": 2048, 00:13:17.262 "data_size": 63488 00:13:17.262 } 00:13:17.262 ] 00:13:17.262 }' 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.262 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.830 [2024-12-09 14:45:55.646868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.830 [2024-12-09 14:45:55.646902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.830 00:13:17.830 Latency(us) 00:13:17.830 [2024-12-09T14:45:55.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.830 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:17.830 raid_bdev1 : 7.75 91.20 273.59 0.00 0.00 14113.48 325.53 114473.36 00:13:17.830 [2024-12-09T14:45:55.952Z] =================================================================================================================== 00:13:17.830 [2024-12-09T14:45:55.952Z] Total : 91.20 273.59 0.00 0.00 14113.48 325.53 114473.36 00:13:17.830 [2024-12-09 
14:45:55.769612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.830 [2024-12-09 14:45:55.769799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.830 [2024-12-09 14:45:55.769899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.830 [2024-12-09 14:45:55.769911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:17.830 { 00:13:17.830 "results": [ 00:13:17.830 { 00:13:17.830 "job": "raid_bdev1", 00:13:17.830 "core_mask": "0x1", 00:13:17.830 "workload": "randrw", 00:13:17.830 "percentage": 50, 00:13:17.830 "status": "finished", 00:13:17.830 "queue_depth": 2, 00:13:17.830 "io_size": 3145728, 00:13:17.830 "runtime": 7.752502, 00:13:17.830 "iops": 91.19636473489462, 00:13:17.830 "mibps": 273.5890942046839, 00:13:17.830 "io_failed": 0, 00:13:17.830 "io_timeout": 0, 00:13:17.830 "avg_latency_us": 14113.484306035094, 00:13:17.830 "min_latency_us": 325.5336244541485, 00:13:17.830 "max_latency_us": 114473.36244541485 00:13:17.830 } 00:13:17.830 ], 00:13:17.830 "core_count": 1 00:13:17.830 } 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.830 14:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:18.090 /dev/nbd0 00:13:18.090 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:18.090 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:18.090 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:18.090 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:18.090 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:18.090 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:18.090 14:45:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:18.090 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.091 1+0 records in 00:13:18.091 1+0 records out 00:13:18.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062035 s, 6.6 MB/s 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk.sock 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.091 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:18.350 /dev/nbd1 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.350 1+0 records in 00:13:18.350 1+0 records out 00:13:18.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475168 s, 8.6 MB/s 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.350 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:18.610 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:18.610 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.610 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:18.610 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.610 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.610 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.610 14:45:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.869 14:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.128 [2024-12-09 14:45:57.156555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.128 [2024-12-09 14:45:57.156628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.128 [2024-12-09 14:45:57.156658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:19.128 [2024-12-09 14:45:57.156680] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.128 [2024-12-09 14:45:57.159149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.128 [2024-12-09 14:45:57.159194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.128 [2024-12-09 14:45:57.159317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:19.128 [2024-12-09 14:45:57.159378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.128 [2024-12-09 14:45:57.159534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.128 spare 00:13:19.128 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.129 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:19.129 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.129 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.388 [2024-12-09 14:45:57.259475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:19.388 [2024-12-09 14:45:57.259541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:19.388 [2024-12-09 14:45:57.259953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:19.388 [2024-12-09 14:45:57.260208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:19.388 [2024-12-09 14:45:57.260229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:19.388 [2024-12-09 14:45:57.260505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.388 "name": "raid_bdev1", 00:13:19.388 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:19.388 "strip_size_kb": 0, 00:13:19.388 "state": "online", 00:13:19.388 "raid_level": 
"raid1", 00:13:19.388 "superblock": true, 00:13:19.388 "num_base_bdevs": 2, 00:13:19.388 "num_base_bdevs_discovered": 2, 00:13:19.388 "num_base_bdevs_operational": 2, 00:13:19.388 "base_bdevs_list": [ 00:13:19.388 { 00:13:19.388 "name": "spare", 00:13:19.388 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:19.388 "is_configured": true, 00:13:19.388 "data_offset": 2048, 00:13:19.388 "data_size": 63488 00:13:19.388 }, 00:13:19.388 { 00:13:19.388 "name": "BaseBdev2", 00:13:19.388 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:19.388 "is_configured": true, 00:13:19.388 "data_offset": 2048, 00:13:19.388 "data_size": 63488 00:13:19.388 } 00:13:19.388 ] 00:13:19.388 }' 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.388 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.648 "name": "raid_bdev1", 00:13:19.648 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:19.648 "strip_size_kb": 0, 00:13:19.648 "state": "online", 00:13:19.648 "raid_level": "raid1", 00:13:19.648 "superblock": true, 00:13:19.648 "num_base_bdevs": 2, 00:13:19.648 "num_base_bdevs_discovered": 2, 00:13:19.648 "num_base_bdevs_operational": 2, 00:13:19.648 "base_bdevs_list": [ 00:13:19.648 { 00:13:19.648 "name": "spare", 00:13:19.648 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:19.648 "is_configured": true, 00:13:19.648 "data_offset": 2048, 00:13:19.648 "data_size": 63488 00:13:19.648 }, 00:13:19.648 { 00:13:19.648 "name": "BaseBdev2", 00:13:19.648 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:19.648 "is_configured": true, 00:13:19.648 "data_offset": 2048, 00:13:19.648 "data_size": 63488 00:13:19.648 } 00:13:19.648 ] 00:13:19.648 }' 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.648 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.907 [2024-12-09 14:45:57.843719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.907 14:45:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.907 "name": "raid_bdev1", 00:13:19.907 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:19.907 "strip_size_kb": 0, 00:13:19.907 "state": "online", 00:13:19.907 "raid_level": "raid1", 00:13:19.907 "superblock": true, 00:13:19.907 "num_base_bdevs": 2, 00:13:19.907 "num_base_bdevs_discovered": 1, 00:13:19.907 "num_base_bdevs_operational": 1, 00:13:19.907 "base_bdevs_list": [ 00:13:19.907 { 00:13:19.907 "name": null, 00:13:19.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.907 "is_configured": false, 00:13:19.907 "data_offset": 0, 00:13:19.907 "data_size": 63488 00:13:19.907 }, 00:13:19.907 { 00:13:19.907 "name": "BaseBdev2", 00:13:19.907 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:19.907 "is_configured": true, 00:13:19.907 "data_offset": 2048, 00:13:19.907 "data_size": 63488 00:13:19.907 } 00:13:19.907 ] 00:13:19.907 }' 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.907 14:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.475 14:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.475 14:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.475 14:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.475 [2024-12-09 14:45:58.311135] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.475 [2024-12-09 14:45:58.311382] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:20.475 [2024-12-09 14:45:58.311412] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:20.475 [2024-12-09 14:45:58.311450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.475 [2024-12-09 14:45:58.331382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:20.475 14:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.475 14:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:20.475 [2024-12-09 14:45:58.333616] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.412 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.412 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.412 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.412 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.413 "name": "raid_bdev1", 00:13:21.413 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:21.413 "strip_size_kb": 0, 00:13:21.413 "state": "online", 00:13:21.413 "raid_level": "raid1", 00:13:21.413 "superblock": true, 00:13:21.413 "num_base_bdevs": 2, 00:13:21.413 "num_base_bdevs_discovered": 2, 00:13:21.413 "num_base_bdevs_operational": 2, 00:13:21.413 "process": { 00:13:21.413 "type": "rebuild", 00:13:21.413 "target": "spare", 00:13:21.413 "progress": { 00:13:21.413 "blocks": 20480, 00:13:21.413 "percent": 32 00:13:21.413 } 00:13:21.413 }, 00:13:21.413 "base_bdevs_list": [ 00:13:21.413 { 00:13:21.413 "name": "spare", 00:13:21.413 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:21.413 "is_configured": true, 00:13:21.413 "data_offset": 2048, 00:13:21.413 "data_size": 63488 00:13:21.413 }, 00:13:21.413 { 00:13:21.413 "name": "BaseBdev2", 00:13:21.413 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:21.413 "is_configured": true, 00:13:21.413 "data_offset": 2048, 00:13:21.413 "data_size": 63488 00:13:21.413 } 00:13:21.413 ] 00:13:21.413 }' 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:21.413 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.413 [2024-12-09 14:45:59.473049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.673 [2024-12-09 14:45:59.540012] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.673 [2024-12-09 14:45:59.540105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.673 [2024-12-09 14:45:59.540123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.673 [2024-12-09 14:45:59.540134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.673 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.674 
14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.674 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.674 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.674 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.674 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.674 "name": "raid_bdev1", 00:13:21.674 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:21.674 "strip_size_kb": 0, 00:13:21.674 "state": "online", 00:13:21.674 "raid_level": "raid1", 00:13:21.674 "superblock": true, 00:13:21.674 "num_base_bdevs": 2, 00:13:21.674 "num_base_bdevs_discovered": 1, 00:13:21.674 "num_base_bdevs_operational": 1, 00:13:21.674 "base_bdevs_list": [ 00:13:21.674 { 00:13:21.674 "name": null, 00:13:21.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.674 "is_configured": false, 00:13:21.674 "data_offset": 0, 00:13:21.674 "data_size": 63488 00:13:21.674 }, 00:13:21.674 { 00:13:21.674 "name": "BaseBdev2", 00:13:21.674 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:21.674 "is_configured": true, 00:13:21.674 "data_offset": 2048, 00:13:21.674 "data_size": 63488 00:13:21.674 } 00:13:21.674 ] 00:13:21.674 }' 00:13:21.674 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.674 14:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.245 14:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.245 14:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.245 14:46:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.245 [2024-12-09 14:46:00.098698] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.245 [2024-12-09 14:46:00.098790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.245 [2024-12-09 14:46:00.098818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:22.245 [2024-12-09 14:46:00.098831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.245 [2024-12-09 14:46:00.099403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.245 [2024-12-09 14:46:00.099430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.245 [2024-12-09 14:46:00.099538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:22.245 [2024-12-09 14:46:00.099556] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.245 [2024-12-09 14:46:00.099567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:22.245 [2024-12-09 14:46:00.099606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.245 [2024-12-09 14:46:00.119004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:22.245 spare 00:13:22.245 14:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.245 14:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:22.245 [2024-12-09 14:46:00.121307] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.183 "name": "raid_bdev1", 00:13:23.183 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:23.183 "strip_size_kb": 0, 00:13:23.183 
"state": "online", 00:13:23.183 "raid_level": "raid1", 00:13:23.183 "superblock": true, 00:13:23.183 "num_base_bdevs": 2, 00:13:23.183 "num_base_bdevs_discovered": 2, 00:13:23.183 "num_base_bdevs_operational": 2, 00:13:23.183 "process": { 00:13:23.183 "type": "rebuild", 00:13:23.183 "target": "spare", 00:13:23.183 "progress": { 00:13:23.183 "blocks": 20480, 00:13:23.183 "percent": 32 00:13:23.183 } 00:13:23.183 }, 00:13:23.183 "base_bdevs_list": [ 00:13:23.183 { 00:13:23.183 "name": "spare", 00:13:23.183 "uuid": "90023cd7-89a7-501e-a822-2a41ac2ee329", 00:13:23.183 "is_configured": true, 00:13:23.183 "data_offset": 2048, 00:13:23.183 "data_size": 63488 00:13:23.183 }, 00:13:23.183 { 00:13:23.183 "name": "BaseBdev2", 00:13:23.183 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:23.183 "is_configured": true, 00:13:23.183 "data_offset": 2048, 00:13:23.183 "data_size": 63488 00:13:23.183 } 00:13:23.183 ] 00:13:23.183 }' 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.183 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.183 [2024-12-09 14:46:01.288630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.442 [2024-12-09 14:46:01.327649] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:23.442 [2024-12-09 14:46:01.327744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.442 [2024-12-09 14:46:01.327769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.442 [2024-12-09 14:46:01.327778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.442 14:46:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.442 "name": "raid_bdev1", 00:13:23.442 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:23.442 "strip_size_kb": 0, 00:13:23.442 "state": "online", 00:13:23.442 "raid_level": "raid1", 00:13:23.442 "superblock": true, 00:13:23.442 "num_base_bdevs": 2, 00:13:23.442 "num_base_bdevs_discovered": 1, 00:13:23.442 "num_base_bdevs_operational": 1, 00:13:23.442 "base_bdevs_list": [ 00:13:23.442 { 00:13:23.442 "name": null, 00:13:23.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.442 "is_configured": false, 00:13:23.442 "data_offset": 0, 00:13:23.442 "data_size": 63488 00:13:23.442 }, 00:13:23.442 { 00:13:23.442 "name": "BaseBdev2", 00:13:23.442 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:23.442 "is_configured": true, 00:13:23.442 "data_offset": 2048, 00:13:23.442 "data_size": 63488 00:13:23.442 } 00:13:23.442 ] 00:13:23.442 }' 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.442 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.701 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.701 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.701 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.701 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.701 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.960 "name": "raid_bdev1", 00:13:23.960 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:23.960 "strip_size_kb": 0, 00:13:23.960 "state": "online", 00:13:23.960 "raid_level": "raid1", 00:13:23.960 "superblock": true, 00:13:23.960 "num_base_bdevs": 2, 00:13:23.960 "num_base_bdevs_discovered": 1, 00:13:23.960 "num_base_bdevs_operational": 1, 00:13:23.960 "base_bdevs_list": [ 00:13:23.960 { 00:13:23.960 "name": null, 00:13:23.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.960 "is_configured": false, 00:13:23.960 "data_offset": 0, 00:13:23.960 "data_size": 63488 00:13:23.960 }, 00:13:23.960 { 00:13:23.960 "name": "BaseBdev2", 00:13:23.960 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:23.960 "is_configured": true, 00:13:23.960 "data_offset": 2048, 00:13:23.960 "data_size": 63488 00:13:23.960 } 00:13:23.960 ] 00:13:23.960 }' 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.960 [2024-12-09 14:46:01.959043] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:23.960 [2024-12-09 14:46:01.959116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.960 [2024-12-09 14:46:01.959153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:23.960 [2024-12-09 14:46:01.959165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.960 [2024-12-09 14:46:01.959686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.960 [2024-12-09 14:46:01.959772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:23.960 [2024-12-09 14:46:01.959881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:23.960 [2024-12-09 14:46:01.959898] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:23.960 [2024-12-09 14:46:01.959909] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:23.960 [2024-12-09 14:46:01.959920] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:23.960 BaseBdev1 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.960 14:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.896 14:46:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.154 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.154 "name": "raid_bdev1", 00:13:25.154 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:25.154 "strip_size_kb": 0, 00:13:25.154 "state": "online", 00:13:25.154 "raid_level": "raid1", 00:13:25.154 "superblock": true, 00:13:25.154 "num_base_bdevs": 2, 00:13:25.154 "num_base_bdevs_discovered": 1, 00:13:25.154 "num_base_bdevs_operational": 1, 00:13:25.154 "base_bdevs_list": [ 00:13:25.154 { 00:13:25.154 "name": null, 00:13:25.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.154 "is_configured": false, 00:13:25.154 "data_offset": 0, 00:13:25.154 "data_size": 63488 00:13:25.154 }, 00:13:25.154 { 00:13:25.154 "name": "BaseBdev2", 00:13:25.154 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:25.154 "is_configured": true, 00:13:25.154 "data_offset": 2048, 00:13:25.154 "data_size": 63488 00:13:25.154 } 00:13:25.154 ] 00:13:25.154 }' 00:13:25.154 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.154 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.413 "name": "raid_bdev1", 00:13:25.413 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:25.413 "strip_size_kb": 0, 00:13:25.413 "state": "online", 00:13:25.413 "raid_level": "raid1", 00:13:25.413 "superblock": true, 00:13:25.413 "num_base_bdevs": 2, 00:13:25.413 "num_base_bdevs_discovered": 1, 00:13:25.413 "num_base_bdevs_operational": 1, 00:13:25.413 "base_bdevs_list": [ 00:13:25.413 { 00:13:25.413 "name": null, 00:13:25.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.413 "is_configured": false, 00:13:25.413 "data_offset": 0, 00:13:25.413 "data_size": 63488 00:13:25.413 }, 00:13:25.413 { 00:13:25.413 "name": "BaseBdev2", 00:13:25.413 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:25.413 "is_configured": true, 00:13:25.413 "data_offset": 2048, 00:13:25.413 "data_size": 63488 00:13:25.413 } 00:13:25.413 ] 00:13:25.413 }' 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.413 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.673 [2024-12-09 14:46:03.540662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.673 [2024-12-09 14:46:03.540855] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:25.673 [2024-12-09 14:46:03.540873] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:25.673 request: 00:13:25.673 { 00:13:25.673 "base_bdev": "BaseBdev1", 00:13:25.673 "raid_bdev": "raid_bdev1", 00:13:25.673 "method": "bdev_raid_add_base_bdev", 00:13:25.673 "req_id": 1 00:13:25.673 } 00:13:25.673 Got JSON-RPC error response 00:13:25.673 response: 00:13:25.673 { 00:13:25.673 "code": -22, 00:13:25.673 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:25.673 } 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.673 14:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.610 "name": "raid_bdev1", 00:13:26.610 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:26.610 "strip_size_kb": 0, 00:13:26.610 "state": "online", 00:13:26.610 "raid_level": "raid1", 00:13:26.610 "superblock": true, 00:13:26.610 "num_base_bdevs": 2, 00:13:26.610 "num_base_bdevs_discovered": 1, 00:13:26.610 "num_base_bdevs_operational": 1, 00:13:26.610 "base_bdevs_list": [ 00:13:26.610 { 00:13:26.610 "name": null, 00:13:26.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.610 "is_configured": false, 00:13:26.610 "data_offset": 0, 00:13:26.610 "data_size": 63488 00:13:26.610 }, 00:13:26.610 { 00:13:26.610 "name": "BaseBdev2", 00:13:26.610 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:26.610 "is_configured": true, 00:13:26.610 "data_offset": 2048, 00:13:26.610 "data_size": 63488 00:13:26.610 } 00:13:26.610 ] 00:13:26.610 }' 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.610 14:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.178 14:46:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.178 "name": "raid_bdev1", 00:13:27.178 "uuid": "ed3ea607-8779-4e58-9c8c-e23a0bd0a6fc", 00:13:27.178 "strip_size_kb": 0, 00:13:27.178 "state": "online", 00:13:27.178 "raid_level": "raid1", 00:13:27.178 "superblock": true, 00:13:27.178 "num_base_bdevs": 2, 00:13:27.178 "num_base_bdevs_discovered": 1, 00:13:27.178 "num_base_bdevs_operational": 1, 00:13:27.178 "base_bdevs_list": [ 00:13:27.178 { 00:13:27.178 "name": null, 00:13:27.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.178 "is_configured": false, 00:13:27.178 "data_offset": 0, 00:13:27.178 "data_size": 63488 00:13:27.178 }, 00:13:27.178 { 00:13:27.178 "name": "BaseBdev2", 00:13:27.178 "uuid": "1faf690c-0814-5930-bb0b-04462bf3a9e1", 00:13:27.178 "is_configured": true, 00:13:27.178 "data_offset": 2048, 00:13:27.178 "data_size": 63488 00:13:27.178 } 00:13:27.178 ] 00:13:27.178 }' 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.178 14:46:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78159 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78159 ']' 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78159 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78159 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.178 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78159' 00:13:27.178 killing process with pid 78159 00:13:27.178 Received shutdown signal, test time was about 17.152490 seconds 00:13:27.178 00:13:27.178 Latency(us) 00:13:27.178 [2024-12-09T14:46:05.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.179 [2024-12-09T14:46:05.301Z] =================================================================================================================== 00:13:27.179 [2024-12-09T14:46:05.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.179 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78159 00:13:27.179 [2024-12-09 14:46:05.124505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.179 [2024-12-09 14:46:05.124646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.179 14:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78159 00:13:27.179 [2024-12-09 14:46:05.124702] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.179 [2024-12-09 14:46:05.124717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:27.438 [2024-12-09 14:46:05.360827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:28.818 00:13:28.818 real 0m20.515s 00:13:28.818 user 0m26.841s 00:13:28.818 sys 0m2.289s 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.818 ************************************ 00:13:28.818 END TEST raid_rebuild_test_sb_io 00:13:28.818 ************************************ 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.818 14:46:06 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:28.818 14:46:06 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:28.818 14:46:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:28.818 14:46:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.818 14:46:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.818 ************************************ 00:13:28.818 START TEST raid_rebuild_test 00:13:28.818 ************************************ 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:28.818 14:46:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78851 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78851 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78851 ']' 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.818 14:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.818 [2024-12-09 14:46:06.799875] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:13:28.818 [2024-12-09 14:46:06.800074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.818 Zero copy mechanism will not be used. 00:13:28.818 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78851 ] 00:13:29.078 [2024-12-09 14:46:06.976773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.078 [2024-12-09 14:46:07.098202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.338 [2024-12-09 14:46:07.300461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.338 [2024-12-09 14:46:07.300619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.598 BaseBdev1_malloc 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:29.598 [2024-12-09 14:46:07.683868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.598 [2024-12-09 14:46:07.683985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.598 [2024-12-09 14:46:07.684025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.598 [2024-12-09 14:46:07.684069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.598 [2024-12-09 14:46:07.686393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.598 [2024-12-09 14:46:07.686475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.598 BaseBdev1 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.598 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.858 BaseBdev2_malloc 00:13:29.858 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.858 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:29.858 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.858 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.858 [2024-12-09 14:46:07.740259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:29.858 [2024-12-09 14:46:07.740327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:29.858 [2024-12-09 14:46:07.740352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:29.858 [2024-12-09 14:46:07.740364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.858 [2024-12-09 14:46:07.742709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.858 [2024-12-09 14:46:07.742758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.858 BaseBdev2 00:13:29.858 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.858 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.858 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 BaseBdev3_malloc 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 [2024-12-09 14:46:07.808782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:29.859 [2024-12-09 14:46:07.808871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.859 [2024-12-09 14:46:07.808907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:29.859 [2024-12-09 14:46:07.808918] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.859 [2024-12-09 14:46:07.811083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.859 [2024-12-09 14:46:07.811198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:29.859 BaseBdev3 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 BaseBdev4_malloc 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 [2024-12-09 14:46:07.866324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:29.859 [2024-12-09 14:46:07.866453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.859 [2024-12-09 14:46:07.866503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:29.859 [2024-12-09 14:46:07.866542] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.859 [2024-12-09 14:46:07.868881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.859 [2024-12-09 14:46:07.868959] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:29.859 BaseBdev4 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 spare_malloc 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 spare_delay 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 [2024-12-09 14:46:07.927014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.859 [2024-12-09 14:46:07.927124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.859 [2024-12-09 14:46:07.927166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:29.859 [2024-12-09 14:46:07.927178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.859 [2024-12-09 
14:46:07.929462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.859 [2024-12-09 14:46:07.929504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.859 spare 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 [2024-12-09 14:46:07.935043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.859 [2024-12-09 14:46:07.936976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.859 [2024-12-09 14:46:07.937038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.859 [2024-12-09 14:46:07.937089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:29.859 [2024-12-09 14:46:07.937175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:29.859 [2024-12-09 14:46:07.937187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:29.859 [2024-12-09 14:46:07.937423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:29.859 [2024-12-09 14:46:07.937578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:29.859 [2024-12-09 14:46:07.937606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:29.859 [2024-12-09 14:46:07.937784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.859 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.119 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.119 "name": "raid_bdev1", 00:13:30.119 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:30.119 "strip_size_kb": 0, 00:13:30.119 "state": "online", 00:13:30.119 "raid_level": 
"raid1", 00:13:30.119 "superblock": false, 00:13:30.119 "num_base_bdevs": 4, 00:13:30.119 "num_base_bdevs_discovered": 4, 00:13:30.119 "num_base_bdevs_operational": 4, 00:13:30.119 "base_bdevs_list": [ 00:13:30.119 { 00:13:30.119 "name": "BaseBdev1", 00:13:30.119 "uuid": "ee2bb298-31a8-5deb-a2a7-f455a11982eb", 00:13:30.119 "is_configured": true, 00:13:30.119 "data_offset": 0, 00:13:30.119 "data_size": 65536 00:13:30.119 }, 00:13:30.119 { 00:13:30.119 "name": "BaseBdev2", 00:13:30.119 "uuid": "03eefeab-03d4-5c89-a3aa-803f3202cd4a", 00:13:30.119 "is_configured": true, 00:13:30.119 "data_offset": 0, 00:13:30.119 "data_size": 65536 00:13:30.119 }, 00:13:30.119 { 00:13:30.119 "name": "BaseBdev3", 00:13:30.119 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:30.119 "is_configured": true, 00:13:30.119 "data_offset": 0, 00:13:30.119 "data_size": 65536 00:13:30.119 }, 00:13:30.119 { 00:13:30.119 "name": "BaseBdev4", 00:13:30.119 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:30.119 "is_configured": true, 00:13:30.119 "data_offset": 0, 00:13:30.119 "data_size": 65536 00:13:30.119 } 00:13:30.119 ] 00:13:30.119 }' 00:13:30.119 14:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.119 14:46:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.379 [2024-12-09 14:46:08.362721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.379 14:46:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.379 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:30.380 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.380 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.380 14:46:08 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:30.639 [2024-12-09 14:46:08.657931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:30.639 /dev/nbd0 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.639 1+0 records in 00:13:30.639 1+0 records out 00:13:30.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413759 s, 9.9 MB/s 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:30.639 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.640 14:46:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.640 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:30.640 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:30.640 14:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:37.207 65536+0 records in 00:13:37.207 65536+0 records out 00:13:37.207 33554432 bytes (34 MB, 32 MiB) copied, 5.77459 s, 5.8 MB/s 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:37.207 [2024-12-09 14:46:14.700360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.207 
14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.207 [2024-12-09 14:46:14.736402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.207 14:46:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.207 "name": "raid_bdev1", 00:13:37.207 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:37.207 "strip_size_kb": 0, 00:13:37.207 "state": "online", 00:13:37.207 "raid_level": "raid1", 00:13:37.207 "superblock": false, 00:13:37.207 "num_base_bdevs": 4, 00:13:37.207 "num_base_bdevs_discovered": 3, 00:13:37.207 "num_base_bdevs_operational": 3, 00:13:37.207 "base_bdevs_list": [ 00:13:37.207 { 00:13:37.207 "name": null, 00:13:37.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.207 "is_configured": false, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 }, 00:13:37.207 { 00:13:37.207 "name": "BaseBdev2", 00:13:37.207 "uuid": "03eefeab-03d4-5c89-a3aa-803f3202cd4a", 00:13:37.207 "is_configured": true, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 }, 00:13:37.207 { 00:13:37.207 "name": "BaseBdev3", 00:13:37.207 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:37.207 "is_configured": true, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 }, 00:13:37.207 { 00:13:37.207 "name": "BaseBdev4", 00:13:37.207 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:37.207 
"is_configured": true, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 } 00:13:37.207 ] 00:13:37.207 }' 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.207 14:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.207 14:46:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.207 14:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.207 14:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.207 [2024-12-09 14:46:15.211637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.207 [2024-12-09 14:46:15.227969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:37.207 14:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.207 14:46:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:37.207 [2024-12-09 14:46:15.230042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.141 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.401 "name": "raid_bdev1", 00:13:38.401 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:38.401 "strip_size_kb": 0, 00:13:38.401 "state": "online", 00:13:38.401 "raid_level": "raid1", 00:13:38.401 "superblock": false, 00:13:38.401 "num_base_bdevs": 4, 00:13:38.401 "num_base_bdevs_discovered": 4, 00:13:38.401 "num_base_bdevs_operational": 4, 00:13:38.401 "process": { 00:13:38.401 "type": "rebuild", 00:13:38.401 "target": "spare", 00:13:38.401 "progress": { 00:13:38.401 "blocks": 20480, 00:13:38.401 "percent": 31 00:13:38.401 } 00:13:38.401 }, 00:13:38.401 "base_bdevs_list": [ 00:13:38.401 { 00:13:38.401 "name": "spare", 00:13:38.401 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:38.401 "is_configured": true, 00:13:38.401 "data_offset": 0, 00:13:38.401 "data_size": 65536 00:13:38.401 }, 00:13:38.401 { 00:13:38.401 "name": "BaseBdev2", 00:13:38.401 "uuid": "03eefeab-03d4-5c89-a3aa-803f3202cd4a", 00:13:38.401 "is_configured": true, 00:13:38.401 "data_offset": 0, 00:13:38.401 "data_size": 65536 00:13:38.401 }, 00:13:38.401 { 00:13:38.401 "name": "BaseBdev3", 00:13:38.401 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:38.401 "is_configured": true, 00:13:38.401 "data_offset": 0, 00:13:38.401 "data_size": 65536 00:13:38.401 }, 00:13:38.401 { 00:13:38.401 "name": "BaseBdev4", 00:13:38.401 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:38.401 "is_configured": true, 00:13:38.401 "data_offset": 0, 00:13:38.401 "data_size": 65536 00:13:38.401 } 00:13:38.401 ] 00:13:38.401 }' 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.401 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.401 [2024-12-09 14:46:16.389490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.401 [2024-12-09 14:46:16.435858] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:38.401 [2024-12-09 14:46:16.435950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.402 [2024-12-09 14:46:16.435970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.402 [2024-12-09 14:46:16.435981] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.402 "name": "raid_bdev1", 00:13:38.402 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:38.402 "strip_size_kb": 0, 00:13:38.402 "state": "online", 00:13:38.402 "raid_level": "raid1", 00:13:38.402 "superblock": false, 00:13:38.402 "num_base_bdevs": 4, 00:13:38.402 "num_base_bdevs_discovered": 3, 00:13:38.402 "num_base_bdevs_operational": 3, 00:13:38.402 "base_bdevs_list": [ 00:13:38.402 { 00:13:38.402 "name": null, 00:13:38.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.402 "is_configured": false, 00:13:38.402 "data_offset": 0, 00:13:38.402 "data_size": 65536 00:13:38.402 }, 00:13:38.402 { 00:13:38.402 "name": "BaseBdev2", 00:13:38.402 "uuid": "03eefeab-03d4-5c89-a3aa-803f3202cd4a", 00:13:38.402 "is_configured": true, 00:13:38.402 "data_offset": 0, 00:13:38.402 "data_size": 65536 00:13:38.402 }, 00:13:38.402 { 
00:13:38.402 "name": "BaseBdev3", 00:13:38.402 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:38.402 "is_configured": true, 00:13:38.402 "data_offset": 0, 00:13:38.402 "data_size": 65536 00:13:38.402 }, 00:13:38.402 { 00:13:38.402 "name": "BaseBdev4", 00:13:38.402 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:38.402 "is_configured": true, 00:13:38.402 "data_offset": 0, 00:13:38.402 "data_size": 65536 00:13:38.402 } 00:13:38.402 ] 00:13:38.402 }' 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.402 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.973 "name": "raid_bdev1", 00:13:38.973 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:38.973 "strip_size_kb": 0, 00:13:38.973 "state": "online", 
00:13:38.973 "raid_level": "raid1", 00:13:38.973 "superblock": false, 00:13:38.973 "num_base_bdevs": 4, 00:13:38.973 "num_base_bdevs_discovered": 3, 00:13:38.973 "num_base_bdevs_operational": 3, 00:13:38.973 "base_bdevs_list": [ 00:13:38.973 { 00:13:38.973 "name": null, 00:13:38.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.973 "is_configured": false, 00:13:38.973 "data_offset": 0, 00:13:38.973 "data_size": 65536 00:13:38.973 }, 00:13:38.973 { 00:13:38.973 "name": "BaseBdev2", 00:13:38.973 "uuid": "03eefeab-03d4-5c89-a3aa-803f3202cd4a", 00:13:38.973 "is_configured": true, 00:13:38.973 "data_offset": 0, 00:13:38.973 "data_size": 65536 00:13:38.973 }, 00:13:38.973 { 00:13:38.973 "name": "BaseBdev3", 00:13:38.973 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:38.973 "is_configured": true, 00:13:38.973 "data_offset": 0, 00:13:38.973 "data_size": 65536 00:13:38.973 }, 00:13:38.973 { 00:13:38.973 "name": "BaseBdev4", 00:13:38.973 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:38.973 "is_configured": true, 00:13:38.973 "data_offset": 0, 00:13:38.973 "data_size": 65536 00:13:38.973 } 00:13:38.973 ] 00:13:38.973 }' 00:13:38.973 14:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.973 [2024-12-09 14:46:17.077547] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.973 [2024-12-09 14:46:17.092306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.973 14:46:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:38.973 [2024-12-09 14:46:17.094351] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.354 "name": "raid_bdev1", 00:13:40.354 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:40.354 "strip_size_kb": 0, 00:13:40.354 "state": "online", 00:13:40.354 "raid_level": "raid1", 00:13:40.354 "superblock": false, 00:13:40.354 "num_base_bdevs": 4, 00:13:40.354 
"num_base_bdevs_discovered": 4, 00:13:40.354 "num_base_bdevs_operational": 4, 00:13:40.354 "process": { 00:13:40.354 "type": "rebuild", 00:13:40.354 "target": "spare", 00:13:40.354 "progress": { 00:13:40.354 "blocks": 20480, 00:13:40.354 "percent": 31 00:13:40.354 } 00:13:40.354 }, 00:13:40.354 "base_bdevs_list": [ 00:13:40.354 { 00:13:40.354 "name": "spare", 00:13:40.354 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:40.354 "is_configured": true, 00:13:40.354 "data_offset": 0, 00:13:40.354 "data_size": 65536 00:13:40.354 }, 00:13:40.354 { 00:13:40.354 "name": "BaseBdev2", 00:13:40.354 "uuid": "03eefeab-03d4-5c89-a3aa-803f3202cd4a", 00:13:40.354 "is_configured": true, 00:13:40.354 "data_offset": 0, 00:13:40.354 "data_size": 65536 00:13:40.354 }, 00:13:40.354 { 00:13:40.354 "name": "BaseBdev3", 00:13:40.354 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:40.354 "is_configured": true, 00:13:40.354 "data_offset": 0, 00:13:40.354 "data_size": 65536 00:13:40.354 }, 00:13:40.354 { 00:13:40.354 "name": "BaseBdev4", 00:13:40.354 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:40.354 "is_configured": true, 00:13:40.354 "data_offset": 0, 00:13:40.354 "data_size": 65536 00:13:40.354 } 00:13:40.354 ] 00:13:40.354 }' 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.354 [2024-12-09 14:46:18.241878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:40.354 [2024-12-09 14:46:18.300225] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.354 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.355 14:46:18 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.355 "name": "raid_bdev1", 00:13:40.355 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:40.355 "strip_size_kb": 0, 00:13:40.355 "state": "online", 00:13:40.355 "raid_level": "raid1", 00:13:40.355 "superblock": false, 00:13:40.355 "num_base_bdevs": 4, 00:13:40.355 "num_base_bdevs_discovered": 3, 00:13:40.355 "num_base_bdevs_operational": 3, 00:13:40.355 "process": { 00:13:40.355 "type": "rebuild", 00:13:40.355 "target": "spare", 00:13:40.355 "progress": { 00:13:40.355 "blocks": 24576, 00:13:40.355 "percent": 37 00:13:40.355 } 00:13:40.355 }, 00:13:40.355 "base_bdevs_list": [ 00:13:40.355 { 00:13:40.355 "name": "spare", 00:13:40.355 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:40.355 "is_configured": true, 00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 65536 00:13:40.355 }, 00:13:40.355 { 00:13:40.355 "name": null, 00:13:40.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.355 "is_configured": false, 00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 65536 00:13:40.355 }, 00:13:40.355 { 00:13:40.355 "name": "BaseBdev3", 00:13:40.355 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:40.355 "is_configured": true, 00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 65536 00:13:40.355 }, 00:13:40.355 { 00:13:40.355 "name": "BaseBdev4", 00:13:40.355 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:40.355 "is_configured": true, 00:13:40.355 "data_offset": 0, 00:13:40.355 "data_size": 65536 00:13:40.355 } 00:13:40.355 ] 00:13:40.355 }' 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.355 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.614 14:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.614 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.614 "name": "raid_bdev1", 00:13:40.614 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:40.614 "strip_size_kb": 0, 00:13:40.614 "state": "online", 00:13:40.614 "raid_level": "raid1", 00:13:40.614 "superblock": false, 00:13:40.614 "num_base_bdevs": 4, 00:13:40.614 "num_base_bdevs_discovered": 3, 00:13:40.614 "num_base_bdevs_operational": 3, 00:13:40.614 "process": { 00:13:40.614 "type": "rebuild", 00:13:40.614 "target": "spare", 00:13:40.614 "progress": { 
00:13:40.614 "blocks": 26624, 00:13:40.614 "percent": 40 00:13:40.614 } 00:13:40.614 }, 00:13:40.614 "base_bdevs_list": [ 00:13:40.614 { 00:13:40.614 "name": "spare", 00:13:40.614 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:40.614 "is_configured": true, 00:13:40.614 "data_offset": 0, 00:13:40.614 "data_size": 65536 00:13:40.614 }, 00:13:40.614 { 00:13:40.614 "name": null, 00:13:40.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.614 "is_configured": false, 00:13:40.614 "data_offset": 0, 00:13:40.614 "data_size": 65536 00:13:40.614 }, 00:13:40.614 { 00:13:40.615 "name": "BaseBdev3", 00:13:40.615 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:40.615 "is_configured": true, 00:13:40.615 "data_offset": 0, 00:13:40.615 "data_size": 65536 00:13:40.615 }, 00:13:40.615 { 00:13:40.615 "name": "BaseBdev4", 00:13:40.615 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:40.615 "is_configured": true, 00:13:40.615 "data_offset": 0, 00:13:40.615 "data_size": 65536 00:13:40.615 } 00:13:40.615 ] 00:13:40.615 }' 00:13:40.615 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.615 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.615 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.615 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.615 14:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.554 "name": "raid_bdev1", 00:13:41.554 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:41.554 "strip_size_kb": 0, 00:13:41.554 "state": "online", 00:13:41.554 "raid_level": "raid1", 00:13:41.554 "superblock": false, 00:13:41.554 "num_base_bdevs": 4, 00:13:41.554 "num_base_bdevs_discovered": 3, 00:13:41.554 "num_base_bdevs_operational": 3, 00:13:41.554 "process": { 00:13:41.554 "type": "rebuild", 00:13:41.554 "target": "spare", 00:13:41.554 "progress": { 00:13:41.554 "blocks": 49152, 00:13:41.554 "percent": 75 00:13:41.554 } 00:13:41.554 }, 00:13:41.554 "base_bdevs_list": [ 00:13:41.554 { 00:13:41.554 "name": "spare", 00:13:41.554 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:41.554 "is_configured": true, 00:13:41.554 "data_offset": 0, 00:13:41.554 "data_size": 65536 00:13:41.554 }, 00:13:41.554 { 00:13:41.554 "name": null, 00:13:41.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.554 "is_configured": false, 00:13:41.554 "data_offset": 0, 00:13:41.554 "data_size": 65536 00:13:41.554 }, 00:13:41.554 { 00:13:41.554 "name": "BaseBdev3", 00:13:41.554 "uuid": 
"c2f7a303-a941-52b2-9eef-680056e66997", 00:13:41.554 "is_configured": true, 00:13:41.554 "data_offset": 0, 00:13:41.554 "data_size": 65536 00:13:41.554 }, 00:13:41.554 { 00:13:41.554 "name": "BaseBdev4", 00:13:41.554 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:41.554 "is_configured": true, 00:13:41.554 "data_offset": 0, 00:13:41.554 "data_size": 65536 00:13:41.554 } 00:13:41.554 ] 00:13:41.554 }' 00:13:41.554 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.813 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.813 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.813 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.814 14:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.382 [2024-12-09 14:46:20.310063] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.382 [2024-12-09 14:46:20.310164] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.382 [2024-12-09 14:46:20.310213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.642 14:46:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.642 14:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.901 "name": "raid_bdev1", 00:13:42.901 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:42.901 "strip_size_kb": 0, 00:13:42.901 "state": "online", 00:13:42.901 "raid_level": "raid1", 00:13:42.901 "superblock": false, 00:13:42.901 "num_base_bdevs": 4, 00:13:42.901 "num_base_bdevs_discovered": 3, 00:13:42.901 "num_base_bdevs_operational": 3, 00:13:42.901 "base_bdevs_list": [ 00:13:42.901 { 00:13:42.901 "name": "spare", 00:13:42.901 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:42.901 "is_configured": true, 00:13:42.901 "data_offset": 0, 00:13:42.901 "data_size": 65536 00:13:42.901 }, 00:13:42.901 { 00:13:42.901 "name": null, 00:13:42.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.901 "is_configured": false, 00:13:42.901 "data_offset": 0, 00:13:42.901 "data_size": 65536 00:13:42.901 }, 00:13:42.901 { 00:13:42.901 "name": "BaseBdev3", 00:13:42.901 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:42.901 "is_configured": true, 00:13:42.901 "data_offset": 0, 00:13:42.901 "data_size": 65536 00:13:42.901 }, 00:13:42.901 { 00:13:42.901 "name": "BaseBdev4", 00:13:42.901 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:42.901 "is_configured": true, 00:13:42.901 "data_offset": 0, 00:13:42.901 "data_size": 65536 00:13:42.901 } 00:13:42.901 ] 00:13:42.901 }' 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.901 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.901 "name": "raid_bdev1", 00:13:42.901 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:42.901 "strip_size_kb": 0, 00:13:42.901 "state": "online", 00:13:42.901 "raid_level": "raid1", 00:13:42.901 "superblock": false, 00:13:42.901 "num_base_bdevs": 4, 00:13:42.901 "num_base_bdevs_discovered": 3, 00:13:42.901 "num_base_bdevs_operational": 3, 00:13:42.902 
"base_bdevs_list": [ 00:13:42.902 { 00:13:42.902 "name": "spare", 00:13:42.902 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:42.902 "is_configured": true, 00:13:42.902 "data_offset": 0, 00:13:42.902 "data_size": 65536 00:13:42.902 }, 00:13:42.902 { 00:13:42.902 "name": null, 00:13:42.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.902 "is_configured": false, 00:13:42.902 "data_offset": 0, 00:13:42.902 "data_size": 65536 00:13:42.902 }, 00:13:42.902 { 00:13:42.902 "name": "BaseBdev3", 00:13:42.902 "uuid": "c2f7a303-a941-52b2-9eef-680056e66997", 00:13:42.902 "is_configured": true, 00:13:42.902 "data_offset": 0, 00:13:42.902 "data_size": 65536 00:13:42.902 }, 00:13:42.902 { 00:13:42.902 "name": "BaseBdev4", 00:13:42.902 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:42.902 "is_configured": true, 00:13:42.902 "data_offset": 0, 00:13:42.902 "data_size": 65536 00:13:42.902 } 00:13:42.902 ] 00:13:42.902 }' 00:13:42.902 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.902 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.902 14:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.162 "name": "raid_bdev1", 00:13:43.162 "uuid": "a084cf82-6c88-4bf7-91e5-b94f70218474", 00:13:43.162 "strip_size_kb": 0, 00:13:43.162 "state": "online", 00:13:43.162 "raid_level": "raid1", 00:13:43.162 "superblock": false, 00:13:43.162 "num_base_bdevs": 4, 00:13:43.162 "num_base_bdevs_discovered": 3, 00:13:43.162 "num_base_bdevs_operational": 3, 00:13:43.162 "base_bdevs_list": [ 00:13:43.162 { 00:13:43.162 "name": "spare", 00:13:43.162 "uuid": "0d981eb2-56a9-56dc-88dd-528350312a24", 00:13:43.162 "is_configured": true, 00:13:43.162 "data_offset": 0, 00:13:43.162 "data_size": 65536 00:13:43.162 }, 00:13:43.162 { 00:13:43.162 "name": null, 00:13:43.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.162 "is_configured": false, 00:13:43.162 "data_offset": 0, 00:13:43.162 "data_size": 65536 00:13:43.162 }, 00:13:43.162 { 00:13:43.162 "name": "BaseBdev3", 00:13:43.162 "uuid": 
"c2f7a303-a941-52b2-9eef-680056e66997", 00:13:43.162 "is_configured": true, 00:13:43.162 "data_offset": 0, 00:13:43.162 "data_size": 65536 00:13:43.162 }, 00:13:43.162 { 00:13:43.162 "name": "BaseBdev4", 00:13:43.162 "uuid": "a7cddb48-bbd0-5481-ba26-f1fd5252434d", 00:13:43.162 "is_configured": true, 00:13:43.162 "data_offset": 0, 00:13:43.162 "data_size": 65536 00:13:43.162 } 00:13:43.162 ] 00:13:43.162 }' 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.162 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.421 [2024-12-09 14:46:21.444012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.421 [2024-12-09 14:46:21.444110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.421 [2024-12-09 14:46:21.444244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.421 [2024-12-09 14:46:21.444365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.421 [2024-12-09 14:46:21.444421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.421 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:43.681 /dev/nbd0 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.681 14:46:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.681 1+0 records in 00:13:43.681 1+0 records out 00:13:43.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398998 s, 10.3 MB/s 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.681 14:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:43.941 /dev/nbd1 00:13:43.941 
14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.941 1+0 records in 00:13:43.941 1+0 records out 00:13:43.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419001 s, 9.8 MB/s 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.941 14:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:44.200 14:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:44.200 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.200 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.200 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.200 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:44.200 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.200 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.460 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78851 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78851 ']' 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78851 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78851 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78851' 00:13:44.720 killing process with pid 78851 00:13:44.720 
14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78851 00:13:44.720 Received shutdown signal, test time was about 60.000000 seconds 00:13:44.720 00:13:44.720 Latency(us) 00:13:44.720 [2024-12-09T14:46:22.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.720 [2024-12-09T14:46:22.842Z] =================================================================================================================== 00:13:44.720 [2024-12-09T14:46:22.842Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.720 [2024-12-09 14:46:22.741707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.720 14:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78851 00:13:45.289 [2024-12-09 14:46:23.243378] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.671 ************************************ 00:13:46.671 END TEST raid_rebuild_test 00:13:46.671 ************************************ 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:46.671 00:13:46.671 real 0m17.693s 00:13:46.671 user 0m19.805s 00:13:46.671 sys 0m3.007s 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.671 14:46:24 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:46.671 14:46:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:46.671 14:46:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.671 14:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.671 ************************************ 00:13:46.671 START TEST raid_rebuild_test_sb 00:13:46.671 ************************************ 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79304 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79304 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79304 ']' 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.671 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.671 14:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.671 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.671 Zero copy mechanism will not be used. 00:13:46.671 [2024-12-09 14:46:24.562939] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:13:46.671 [2024-12-09 14:46:24.563053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79304 ] 00:13:46.671 [2024-12-09 14:46:24.738625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.931 [2024-12-09 14:46:24.861582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.189 [2024-12-09 14:46:25.071756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.189 [2024-12-09 14:46:25.071854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.449 BaseBdev1_malloc 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.449 [2024-12-09 14:46:25.455758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.449 [2024-12-09 14:46:25.455822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.449 [2024-12-09 14:46:25.455846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:47.449 [2024-12-09 14:46:25.455858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.449 [2024-12-09 14:46:25.458116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.449 [2024-12-09 14:46:25.458237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.449 BaseBdev1 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.449 BaseBdev2_malloc 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.449 [2024-12-09 14:46:25.510959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:47.449 [2024-12-09 14:46:25.511034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.449 [2024-12-09 14:46:25.511060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:47.449 [2024-12-09 14:46:25.511071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.449 [2024-12-09 14:46:25.513471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.449 [2024-12-09 14:46:25.513514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.449 BaseBdev2 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.449 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.709 BaseBdev3_malloc 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.709 [2024-12-09 14:46:25.583882] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:47.709 [2024-12-09 14:46:25.583937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.709 [2024-12-09 14:46:25.583958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:47.709 [2024-12-09 14:46:25.583969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.709 [2024-12-09 14:46:25.586057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.709 [2024-12-09 14:46:25.586097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.709 BaseBdev3 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.709 BaseBdev4_malloc 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:47.709 [2024-12-09 14:46:25.639237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:47.709 [2024-12-09 14:46:25.639365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.709 [2024-12-09 14:46:25.639394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:47.709 [2024-12-09 14:46:25.639405] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.709 [2024-12-09 14:46:25.641469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.709 [2024-12-09 14:46:25.641524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:47.709 BaseBdev4 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.709 spare_malloc 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.709 spare_delay 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.709 14:46:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.709 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.710 [2024-12-09 14:46:25.706916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.710 [2024-12-09 14:46:25.706971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.710 [2024-12-09 14:46:25.706987] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:47.710 [2024-12-09 14:46:25.706998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.710 [2024-12-09 14:46:25.709137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.710 [2024-12-09 14:46:25.709240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.710 spare 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.710 [2024-12-09 14:46:25.718940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.710 [2024-12-09 14:46:25.720753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.710 [2024-12-09 14:46:25.720818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.710 [2024-12-09 14:46:25.720870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.710 [2024-12-09 14:46:25.721050] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:47.710 [2024-12-09 14:46:25.721065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.710 [2024-12-09 14:46:25.721315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.710 [2024-12-09 14:46:25.721483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:47.710 [2024-12-09 14:46:25.721494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:47.710 [2024-12-09 14:46:25.721649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.710 "name": "raid_bdev1", 00:13:47.710 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:47.710 "strip_size_kb": 0, 00:13:47.710 "state": "online", 00:13:47.710 "raid_level": "raid1", 00:13:47.710 "superblock": true, 00:13:47.710 "num_base_bdevs": 4, 00:13:47.710 "num_base_bdevs_discovered": 4, 00:13:47.710 "num_base_bdevs_operational": 4, 00:13:47.710 "base_bdevs_list": [ 00:13:47.710 { 00:13:47.710 "name": "BaseBdev1", 00:13:47.710 "uuid": "499e4d74-3192-5d74-baed-57caeb136512", 00:13:47.710 "is_configured": true, 00:13:47.710 "data_offset": 2048, 00:13:47.710 "data_size": 63488 00:13:47.710 }, 00:13:47.710 { 00:13:47.710 "name": "BaseBdev2", 00:13:47.710 "uuid": "9adebf04-6910-5886-8345-b4130ad0a82e", 00:13:47.710 "is_configured": true, 00:13:47.710 "data_offset": 2048, 00:13:47.710 "data_size": 63488 00:13:47.710 }, 00:13:47.710 { 00:13:47.710 "name": "BaseBdev3", 00:13:47.710 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:47.710 "is_configured": true, 00:13:47.710 "data_offset": 2048, 00:13:47.710 "data_size": 63488 00:13:47.710 }, 00:13:47.710 { 00:13:47.710 "name": "BaseBdev4", 00:13:47.710 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:47.710 "is_configured": true, 00:13:47.710 "data_offset": 2048, 00:13:47.710 "data_size": 63488 00:13:47.710 } 00:13:47.710 ] 00:13:47.710 }' 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.710 14:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:48.278 [2024-12-09 14:46:26.178626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.278 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:48.558 [2024-12-09 14:46:26.461867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:48.558 /dev/nbd0 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:48.558 
14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.558 1+0 records in 00:13:48.558 1+0 records out 00:13:48.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282232 s, 14.5 MB/s 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:48.558 14:46:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:55.172 63488+0 records in 00:13:55.172 63488+0 records out 00:13:55.172 32505856 bytes (33 MB, 31 MiB) copied, 5.59301 s, 5.8 MB/s 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.172 [2024-12-09 14:46:32.349924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.172 [2024-12-09 14:46:32.366010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.172 
14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.172 "name": "raid_bdev1", 00:13:55.172 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:55.172 "strip_size_kb": 0, 00:13:55.172 "state": 
"online", 00:13:55.172 "raid_level": "raid1", 00:13:55.172 "superblock": true, 00:13:55.172 "num_base_bdevs": 4, 00:13:55.172 "num_base_bdevs_discovered": 3, 00:13:55.172 "num_base_bdevs_operational": 3, 00:13:55.172 "base_bdevs_list": [ 00:13:55.172 { 00:13:55.172 "name": null, 00:13:55.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.172 "is_configured": false, 00:13:55.172 "data_offset": 0, 00:13:55.172 "data_size": 63488 00:13:55.172 }, 00:13:55.172 { 00:13:55.172 "name": "BaseBdev2", 00:13:55.172 "uuid": "9adebf04-6910-5886-8345-b4130ad0a82e", 00:13:55.172 "is_configured": true, 00:13:55.172 "data_offset": 2048, 00:13:55.172 "data_size": 63488 00:13:55.172 }, 00:13:55.172 { 00:13:55.172 "name": "BaseBdev3", 00:13:55.172 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:55.172 "is_configured": true, 00:13:55.172 "data_offset": 2048, 00:13:55.172 "data_size": 63488 00:13:55.172 }, 00:13:55.172 { 00:13:55.172 "name": "BaseBdev4", 00:13:55.172 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:55.172 "is_configured": true, 00:13:55.172 "data_offset": 2048, 00:13:55.172 "data_size": 63488 00:13:55.172 } 00:13:55.172 ] 00:13:55.172 }' 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.172 [2024-12-09 14:46:32.833221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.172 [2024-12-09 14:46:32.848438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.172 14:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:55.172 [2024-12-09 14:46:32.850563] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.739 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.739 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.739 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.739 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.739 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.999 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.999 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.999 14:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.999 14:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.999 14:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.999 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.999 "name": "raid_bdev1", 00:13:55.999 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:55.999 "strip_size_kb": 0, 00:13:55.999 "state": "online", 00:13:55.999 "raid_level": "raid1", 00:13:55.999 "superblock": true, 00:13:55.999 "num_base_bdevs": 4, 00:13:55.999 "num_base_bdevs_discovered": 4, 00:13:55.999 "num_base_bdevs_operational": 4, 00:13:55.999 "process": { 00:13:55.999 "type": "rebuild", 00:13:55.999 "target": "spare", 00:13:55.999 "progress": { 00:13:55.999 "blocks": 20480, 
00:13:55.999 "percent": 32 00:13:55.999 } 00:13:55.999 }, 00:13:55.999 "base_bdevs_list": [ 00:13:55.999 { 00:13:55.999 "name": "spare", 00:13:55.999 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:13:55.999 "is_configured": true, 00:13:55.999 "data_offset": 2048, 00:13:55.999 "data_size": 63488 00:13:55.999 }, 00:13:55.999 { 00:13:55.999 "name": "BaseBdev2", 00:13:55.999 "uuid": "9adebf04-6910-5886-8345-b4130ad0a82e", 00:13:55.999 "is_configured": true, 00:13:55.999 "data_offset": 2048, 00:13:55.999 "data_size": 63488 00:13:55.999 }, 00:13:55.999 { 00:13:55.999 "name": "BaseBdev3", 00:13:55.999 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:55.999 "is_configured": true, 00:13:55.999 "data_offset": 2048, 00:13:55.999 "data_size": 63488 00:13:56.000 }, 00:13:56.000 { 00:13:56.000 "name": "BaseBdev4", 00:13:56.000 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:56.000 "is_configured": true, 00:13:56.000 "data_offset": 2048, 00:13:56.000 "data_size": 63488 00:13:56.000 } 00:13:56.000 ] 00:13:56.000 }' 00:13:56.000 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.000 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.000 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.000 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.000 14:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:56.000 14:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.000 14:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.000 [2024-12-09 14:46:33.995369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.000 [2024-12-09 14:46:34.056472] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:56.000 [2024-12-09 14:46:34.056565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.000 [2024-12-09 14:46:34.056597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.000 [2024-12-09 14:46:34.056608] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.000 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.259 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.259 "name": "raid_bdev1", 00:13:56.259 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:56.259 "strip_size_kb": 0, 00:13:56.259 "state": "online", 00:13:56.259 "raid_level": "raid1", 00:13:56.259 "superblock": true, 00:13:56.259 "num_base_bdevs": 4, 00:13:56.259 "num_base_bdevs_discovered": 3, 00:13:56.259 "num_base_bdevs_operational": 3, 00:13:56.259 "base_bdevs_list": [ 00:13:56.259 { 00:13:56.259 "name": null, 00:13:56.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.259 "is_configured": false, 00:13:56.259 "data_offset": 0, 00:13:56.259 "data_size": 63488 00:13:56.259 }, 00:13:56.259 { 00:13:56.259 "name": "BaseBdev2", 00:13:56.259 "uuid": "9adebf04-6910-5886-8345-b4130ad0a82e", 00:13:56.259 "is_configured": true, 00:13:56.259 "data_offset": 2048, 00:13:56.259 "data_size": 63488 00:13:56.259 }, 00:13:56.259 { 00:13:56.259 "name": "BaseBdev3", 00:13:56.259 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:56.259 "is_configured": true, 00:13:56.259 "data_offset": 2048, 00:13:56.259 "data_size": 63488 00:13:56.259 }, 00:13:56.259 { 00:13:56.259 "name": "BaseBdev4", 00:13:56.259 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:56.259 "is_configured": true, 00:13:56.259 "data_offset": 2048, 00:13:56.259 "data_size": 63488 00:13:56.259 } 00:13:56.259 ] 00:13:56.259 }' 00:13:56.259 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.259 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.519 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:13:56.519 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.519 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.519 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.519 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.520 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.520 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.520 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.520 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.520 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.520 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.520 "name": "raid_bdev1", 00:13:56.520 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:56.520 "strip_size_kb": 0, 00:13:56.520 "state": "online", 00:13:56.520 "raid_level": "raid1", 00:13:56.520 "superblock": true, 00:13:56.520 "num_base_bdevs": 4, 00:13:56.520 "num_base_bdevs_discovered": 3, 00:13:56.520 "num_base_bdevs_operational": 3, 00:13:56.520 "base_bdevs_list": [ 00:13:56.520 { 00:13:56.520 "name": null, 00:13:56.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.520 "is_configured": false, 00:13:56.520 "data_offset": 0, 00:13:56.520 "data_size": 63488 00:13:56.520 }, 00:13:56.520 { 00:13:56.520 "name": "BaseBdev2", 00:13:56.520 "uuid": "9adebf04-6910-5886-8345-b4130ad0a82e", 00:13:56.520 "is_configured": true, 00:13:56.520 "data_offset": 2048, 00:13:56.520 "data_size": 63488 00:13:56.520 }, 00:13:56.520 { 00:13:56.520 "name": "BaseBdev3", 00:13:56.520 "uuid": 
"bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:56.520 "is_configured": true, 00:13:56.520 "data_offset": 2048, 00:13:56.520 "data_size": 63488 00:13:56.520 }, 00:13:56.520 { 00:13:56.520 "name": "BaseBdev4", 00:13:56.520 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:56.520 "is_configured": true, 00:13:56.520 "data_offset": 2048, 00:13:56.520 "data_size": 63488 00:13:56.520 } 00:13:56.520 ] 00:13:56.520 }' 00:13:56.520 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.780 [2024-12-09 14:46:34.708121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.780 [2024-12-09 14:46:34.724241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.780 14:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:56.780 [2024-12-09 14:46:34.726322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.718 "name": "raid_bdev1", 00:13:57.718 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:57.718 "strip_size_kb": 0, 00:13:57.718 "state": "online", 00:13:57.718 "raid_level": "raid1", 00:13:57.718 "superblock": true, 00:13:57.718 "num_base_bdevs": 4, 00:13:57.718 "num_base_bdevs_discovered": 4, 00:13:57.718 "num_base_bdevs_operational": 4, 00:13:57.718 "process": { 00:13:57.718 "type": "rebuild", 00:13:57.718 "target": "spare", 00:13:57.718 "progress": { 00:13:57.718 "blocks": 20480, 00:13:57.718 "percent": 32 00:13:57.718 } 00:13:57.718 }, 00:13:57.718 "base_bdevs_list": [ 00:13:57.718 { 00:13:57.718 "name": "spare", 00:13:57.718 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:13:57.718 "is_configured": true, 00:13:57.718 "data_offset": 2048, 00:13:57.718 "data_size": 63488 00:13:57.718 }, 00:13:57.718 { 00:13:57.718 "name": "BaseBdev2", 00:13:57.718 "uuid": "9adebf04-6910-5886-8345-b4130ad0a82e", 00:13:57.718 "is_configured": true, 00:13:57.718 "data_offset": 2048, 
00:13:57.718 "data_size": 63488 00:13:57.718 }, 00:13:57.718 { 00:13:57.718 "name": "BaseBdev3", 00:13:57.718 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:57.718 "is_configured": true, 00:13:57.718 "data_offset": 2048, 00:13:57.718 "data_size": 63488 00:13:57.718 }, 00:13:57.718 { 00:13:57.718 "name": "BaseBdev4", 00:13:57.718 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:57.718 "is_configured": true, 00:13:57.718 "data_offset": 2048, 00:13:57.718 "data_size": 63488 00:13:57.718 } 00:13:57.718 ] 00:13:57.718 }' 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.718 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:57.977 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.977 14:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.977 [2024-12-09 14:46:35.893998] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.977 [2024-12-09 14:46:36.032389] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.977 "name": "raid_bdev1", 00:13:57.977 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:57.977 "strip_size_kb": 0, 00:13:57.977 "state": "online", 00:13:57.977 "raid_level": "raid1", 00:13:57.977 "superblock": true, 00:13:57.977 "num_base_bdevs": 4, 
00:13:57.977 "num_base_bdevs_discovered": 3, 00:13:57.977 "num_base_bdevs_operational": 3, 00:13:57.977 "process": { 00:13:57.977 "type": "rebuild", 00:13:57.977 "target": "spare", 00:13:57.977 "progress": { 00:13:57.977 "blocks": 24576, 00:13:57.977 "percent": 38 00:13:57.977 } 00:13:57.977 }, 00:13:57.977 "base_bdevs_list": [ 00:13:57.977 { 00:13:57.977 "name": "spare", 00:13:57.977 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:13:57.977 "is_configured": true, 00:13:57.977 "data_offset": 2048, 00:13:57.977 "data_size": 63488 00:13:57.977 }, 00:13:57.977 { 00:13:57.977 "name": null, 00:13:57.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.977 "is_configured": false, 00:13:57.977 "data_offset": 0, 00:13:57.977 "data_size": 63488 00:13:57.977 }, 00:13:57.977 { 00:13:57.977 "name": "BaseBdev3", 00:13:57.977 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:57.977 "is_configured": true, 00:13:57.977 "data_offset": 2048, 00:13:57.977 "data_size": 63488 00:13:57.977 }, 00:13:57.977 { 00:13:57.977 "name": "BaseBdev4", 00:13:57.977 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:57.977 "is_configured": true, 00:13:57.977 "data_offset": 2048, 00:13:57.977 "data_size": 63488 00:13:57.977 } 00:13:57.977 ] 00:13:57.977 }' 00:13:57.977 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.236 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.236 "name": "raid_bdev1", 00:13:58.236 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:58.236 "strip_size_kb": 0, 00:13:58.236 "state": "online", 00:13:58.236 "raid_level": "raid1", 00:13:58.236 "superblock": true, 00:13:58.236 "num_base_bdevs": 4, 00:13:58.236 "num_base_bdevs_discovered": 3, 00:13:58.236 "num_base_bdevs_operational": 3, 00:13:58.236 "process": { 00:13:58.236 "type": "rebuild", 00:13:58.236 "target": "spare", 00:13:58.236 "progress": { 00:13:58.236 "blocks": 26624, 00:13:58.236 "percent": 41 00:13:58.236 } 00:13:58.236 }, 00:13:58.236 "base_bdevs_list": [ 00:13:58.236 { 00:13:58.236 "name": "spare", 00:13:58.236 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:13:58.236 "is_configured": true, 00:13:58.236 "data_offset": 2048, 00:13:58.236 "data_size": 63488 00:13:58.236 }, 00:13:58.236 { 
00:13:58.236 "name": null, 00:13:58.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.236 "is_configured": false, 00:13:58.236 "data_offset": 0, 00:13:58.236 "data_size": 63488 00:13:58.236 }, 00:13:58.236 { 00:13:58.236 "name": "BaseBdev3", 00:13:58.236 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:58.236 "is_configured": true, 00:13:58.236 "data_offset": 2048, 00:13:58.236 "data_size": 63488 00:13:58.237 }, 00:13:58.237 { 00:13:58.237 "name": "BaseBdev4", 00:13:58.237 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:58.237 "is_configured": true, 00:13:58.237 "data_offset": 2048, 00:13:58.237 "data_size": 63488 00:13:58.237 } 00:13:58.237 ] 00:13:58.237 }' 00:13:58.237 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.237 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.237 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.237 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.237 14:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.616 "name": "raid_bdev1", 00:13:59.616 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:13:59.616 "strip_size_kb": 0, 00:13:59.616 "state": "online", 00:13:59.616 "raid_level": "raid1", 00:13:59.616 "superblock": true, 00:13:59.616 "num_base_bdevs": 4, 00:13:59.616 "num_base_bdevs_discovered": 3, 00:13:59.616 "num_base_bdevs_operational": 3, 00:13:59.616 "process": { 00:13:59.616 "type": "rebuild", 00:13:59.616 "target": "spare", 00:13:59.616 "progress": { 00:13:59.616 "blocks": 49152, 00:13:59.616 "percent": 77 00:13:59.616 } 00:13:59.616 }, 00:13:59.616 "base_bdevs_list": [ 00:13:59.616 { 00:13:59.616 "name": "spare", 00:13:59.616 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:13:59.616 "is_configured": true, 00:13:59.616 "data_offset": 2048, 00:13:59.616 "data_size": 63488 00:13:59.616 }, 00:13:59.616 { 00:13:59.616 "name": null, 00:13:59.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.616 "is_configured": false, 00:13:59.616 "data_offset": 0, 00:13:59.616 "data_size": 63488 00:13:59.616 }, 00:13:59.616 { 00:13:59.616 "name": "BaseBdev3", 00:13:59.616 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:13:59.616 "is_configured": true, 00:13:59.616 "data_offset": 2048, 00:13:59.616 "data_size": 63488 00:13:59.616 }, 00:13:59.616 { 00:13:59.616 "name": "BaseBdev4", 00:13:59.616 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:13:59.616 "is_configured": true, 00:13:59.616 "data_offset": 
2048, 00:13:59.616 "data_size": 63488 00:13:59.616 } 00:13:59.616 ] 00:13:59.616 }' 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.616 14:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.876 [2024-12-09 14:46:37.942032] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.876 [2024-12-09 14:46:37.942235] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.876 [2024-12-09 14:46:37.942423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.455 "name": "raid_bdev1", 00:14:00.455 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:00.455 "strip_size_kb": 0, 00:14:00.455 "state": "online", 00:14:00.455 "raid_level": "raid1", 00:14:00.455 "superblock": true, 00:14:00.455 "num_base_bdevs": 4, 00:14:00.455 "num_base_bdevs_discovered": 3, 00:14:00.455 "num_base_bdevs_operational": 3, 00:14:00.455 "base_bdevs_list": [ 00:14:00.455 { 00:14:00.455 "name": "spare", 00:14:00.455 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:14:00.455 "is_configured": true, 00:14:00.455 "data_offset": 2048, 00:14:00.455 "data_size": 63488 00:14:00.455 }, 00:14:00.455 { 00:14:00.455 "name": null, 00:14:00.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.455 "is_configured": false, 00:14:00.455 "data_offset": 0, 00:14:00.455 "data_size": 63488 00:14:00.455 }, 00:14:00.455 { 00:14:00.455 "name": "BaseBdev3", 00:14:00.455 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:00.455 "is_configured": true, 00:14:00.455 "data_offset": 2048, 00:14:00.455 "data_size": 63488 00:14:00.455 }, 00:14:00.455 { 00:14:00.455 "name": "BaseBdev4", 00:14:00.455 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:00.455 "is_configured": true, 00:14:00.455 "data_offset": 2048, 00:14:00.455 "data_size": 63488 00:14:00.455 } 00:14:00.455 ] 00:14:00.455 }' 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:00.455 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.715 "name": "raid_bdev1", 00:14:00.715 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:00.715 "strip_size_kb": 0, 00:14:00.715 "state": "online", 00:14:00.715 "raid_level": "raid1", 00:14:00.715 "superblock": true, 00:14:00.715 "num_base_bdevs": 4, 00:14:00.715 "num_base_bdevs_discovered": 3, 00:14:00.715 "num_base_bdevs_operational": 3, 00:14:00.715 "base_bdevs_list": [ 00:14:00.715 { 00:14:00.715 "name": "spare", 00:14:00.715 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:14:00.715 "is_configured": true, 00:14:00.715 "data_offset": 2048, 
00:14:00.715 "data_size": 63488 00:14:00.715 }, 00:14:00.715 { 00:14:00.715 "name": null, 00:14:00.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.715 "is_configured": false, 00:14:00.715 "data_offset": 0, 00:14:00.715 "data_size": 63488 00:14:00.715 }, 00:14:00.715 { 00:14:00.715 "name": "BaseBdev3", 00:14:00.715 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:00.715 "is_configured": true, 00:14:00.715 "data_offset": 2048, 00:14:00.715 "data_size": 63488 00:14:00.715 }, 00:14:00.715 { 00:14:00.715 "name": "BaseBdev4", 00:14:00.715 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:00.715 "is_configured": true, 00:14:00.715 "data_offset": 2048, 00:14:00.715 "data_size": 63488 00:14:00.715 } 00:14:00.715 ] 00:14:00.715 }' 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.715 
14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.715 "name": "raid_bdev1", 00:14:00.715 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:00.715 "strip_size_kb": 0, 00:14:00.715 "state": "online", 00:14:00.715 "raid_level": "raid1", 00:14:00.715 "superblock": true, 00:14:00.715 "num_base_bdevs": 4, 00:14:00.715 "num_base_bdevs_discovered": 3, 00:14:00.715 "num_base_bdevs_operational": 3, 00:14:00.715 "base_bdevs_list": [ 00:14:00.715 { 00:14:00.715 "name": "spare", 00:14:00.715 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:14:00.715 "is_configured": true, 00:14:00.715 "data_offset": 2048, 00:14:00.715 "data_size": 63488 00:14:00.715 }, 00:14:00.715 { 00:14:00.715 "name": null, 00:14:00.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.715 "is_configured": false, 00:14:00.715 "data_offset": 0, 00:14:00.715 "data_size": 63488 00:14:00.715 }, 00:14:00.715 { 00:14:00.715 "name": "BaseBdev3", 00:14:00.715 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:00.715 "is_configured": true, 00:14:00.715 "data_offset": 2048, 00:14:00.715 "data_size": 63488 
00:14:00.715 }, 00:14:00.715 { 00:14:00.715 "name": "BaseBdev4", 00:14:00.715 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:00.715 "is_configured": true, 00:14:00.715 "data_offset": 2048, 00:14:00.715 "data_size": 63488 00:14:00.715 } 00:14:00.715 ] 00:14:00.715 }' 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.715 14:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.284 [2024-12-09 14:46:39.207385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.284 [2024-12-09 14:46:39.207477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.284 [2024-12-09 14:46:39.207620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.284 [2024-12-09 14:46:39.207751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.284 [2024-12-09 14:46:39.207808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.284 
14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.284 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:01.543 /dev/nbd0 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.543 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.543 1+0 records in 00:14:01.543 1+0 records out 00:14:01.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592192 s, 6.9 MB/s 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.544 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:01.803 /dev/nbd1 00:14:01.803 14:46:39 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.803 1+0 records in 00:14:01.803 1+0 records out 00:14:01.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456934 s, 9.0 MB/s 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:01.803 14:46:39 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.803 14:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:02.062 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:02.062 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.062 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.062 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.062 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:02.062 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.062 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.321 14:46:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.322 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 [2024-12-09 14:46:40.591722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:02.581 [2024-12-09 14:46:40.591822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.581 [2024-12-09 14:46:40.591864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:02.581 [2024-12-09 14:46:40.591893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.581 [2024-12-09 14:46:40.594221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.581 [2024-12-09 14:46:40.594291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.581 [2024-12-09 14:46:40.594432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:02.581 [2024-12-09 14:46:40.594500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.581 [2024-12-09 14:46:40.594682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.581 [2024-12-09 14:46:40.594814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.581 spare 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 [2024-12-09 14:46:40.694766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:02.581 [2024-12-09 14:46:40.694898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.581 [2024-12-09 14:46:40.695291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:02.581 [2024-12-09 14:46:40.695564] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:02.581 [2024-12-09 14:46:40.695629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:02.581 [2024-12-09 14:46:40.695889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.581 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.841 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.841 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.841 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.841 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.841 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.842 "name": "raid_bdev1", 00:14:02.842 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:02.842 "strip_size_kb": 0, 00:14:02.842 "state": "online", 00:14:02.842 "raid_level": "raid1", 00:14:02.842 "superblock": true, 00:14:02.842 "num_base_bdevs": 4, 00:14:02.842 "num_base_bdevs_discovered": 3, 00:14:02.842 "num_base_bdevs_operational": 3, 00:14:02.842 "base_bdevs_list": [ 00:14:02.842 { 00:14:02.842 "name": "spare", 00:14:02.842 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:14:02.842 "is_configured": true, 00:14:02.842 "data_offset": 2048, 00:14:02.842 "data_size": 63488 00:14:02.842 }, 00:14:02.842 { 00:14:02.842 "name": null, 00:14:02.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.842 "is_configured": false, 00:14:02.842 "data_offset": 2048, 00:14:02.842 "data_size": 63488 00:14:02.842 }, 00:14:02.842 { 00:14:02.842 "name": "BaseBdev3", 00:14:02.842 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:02.842 "is_configured": true, 00:14:02.842 "data_offset": 2048, 00:14:02.842 "data_size": 63488 00:14:02.842 }, 00:14:02.842 { 00:14:02.842 "name": "BaseBdev4", 00:14:02.842 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:02.842 "is_configured": true, 00:14:02.842 "data_offset": 2048, 00:14:02.842 "data_size": 63488 00:14:02.842 } 00:14:02.842 ] 00:14:02.842 }' 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.842 14:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.101 14:46:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.101 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.360 "name": "raid_bdev1", 00:14:03.360 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:03.360 "strip_size_kb": 0, 00:14:03.360 "state": "online", 00:14:03.360 "raid_level": "raid1", 00:14:03.360 "superblock": true, 00:14:03.360 "num_base_bdevs": 4, 00:14:03.360 "num_base_bdevs_discovered": 3, 00:14:03.360 "num_base_bdevs_operational": 3, 00:14:03.360 "base_bdevs_list": [ 00:14:03.360 { 00:14:03.360 "name": "spare", 00:14:03.360 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:14:03.360 "is_configured": true, 00:14:03.360 "data_offset": 2048, 00:14:03.360 "data_size": 63488 00:14:03.360 }, 00:14:03.360 { 00:14:03.360 "name": null, 00:14:03.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.360 "is_configured": false, 00:14:03.360 "data_offset": 2048, 00:14:03.360 "data_size": 63488 00:14:03.360 }, 00:14:03.360 { 00:14:03.360 "name": "BaseBdev3", 00:14:03.360 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:03.360 "is_configured": true, 00:14:03.360 "data_offset": 2048, 00:14:03.360 "data_size": 63488 00:14:03.360 
}, 00:14:03.360 { 00:14:03.360 "name": "BaseBdev4", 00:14:03.360 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:03.360 "is_configured": true, 00:14:03.360 "data_offset": 2048, 00:14:03.360 "data_size": 63488 00:14:03.360 } 00:14:03.360 ] 00:14:03.360 }' 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.360 [2024-12-09 14:46:41.407391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.360 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.360 "name": "raid_bdev1", 00:14:03.360 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:03.360 "strip_size_kb": 0, 00:14:03.360 "state": "online", 00:14:03.360 "raid_level": "raid1", 00:14:03.360 "superblock": true, 00:14:03.360 "num_base_bdevs": 4, 00:14:03.360 "num_base_bdevs_discovered": 2, 00:14:03.360 "num_base_bdevs_operational": 
2, 00:14:03.360 "base_bdevs_list": [ 00:14:03.360 { 00:14:03.360 "name": null, 00:14:03.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.360 "is_configured": false, 00:14:03.360 "data_offset": 0, 00:14:03.360 "data_size": 63488 00:14:03.360 }, 00:14:03.360 { 00:14:03.360 "name": null, 00:14:03.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.360 "is_configured": false, 00:14:03.360 "data_offset": 2048, 00:14:03.360 "data_size": 63488 00:14:03.360 }, 00:14:03.360 { 00:14:03.360 "name": "BaseBdev3", 00:14:03.360 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:03.360 "is_configured": true, 00:14:03.360 "data_offset": 2048, 00:14:03.360 "data_size": 63488 00:14:03.360 }, 00:14:03.360 { 00:14:03.360 "name": "BaseBdev4", 00:14:03.360 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:03.360 "is_configured": true, 00:14:03.360 "data_offset": 2048, 00:14:03.360 "data_size": 63488 00:14:03.360 } 00:14:03.360 ] 00:14:03.360 }' 00:14:03.361 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.361 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.929 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.929 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.929 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.929 [2024-12-09 14:46:41.890917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.929 [2024-12-09 14:46:41.891224] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:03.929 [2024-12-09 14:46:41.891302] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:03.929 [2024-12-09 14:46:41.891436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.929 [2024-12-09 14:46:41.907991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:03.929 14:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.929 14:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:03.929 [2024-12-09 14:46:41.910109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.866 "name": "raid_bdev1", 00:14:04.866 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:04.866 "strip_size_kb": 0, 00:14:04.866 "state": "online", 00:14:04.866 "raid_level": "raid1", 
00:14:04.866 "superblock": true, 00:14:04.866 "num_base_bdevs": 4, 00:14:04.866 "num_base_bdevs_discovered": 3, 00:14:04.866 "num_base_bdevs_operational": 3, 00:14:04.866 "process": { 00:14:04.866 "type": "rebuild", 00:14:04.866 "target": "spare", 00:14:04.866 "progress": { 00:14:04.866 "blocks": 20480, 00:14:04.866 "percent": 32 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 "base_bdevs_list": [ 00:14:04.866 { 00:14:04.866 "name": "spare", 00:14:04.866 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:14:04.866 "is_configured": true, 00:14:04.866 "data_offset": 2048, 00:14:04.866 "data_size": 63488 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "name": null, 00:14:04.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.866 "is_configured": false, 00:14:04.866 "data_offset": 2048, 00:14:04.866 "data_size": 63488 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "name": "BaseBdev3", 00:14:04.866 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:04.866 "is_configured": true, 00:14:04.866 "data_offset": 2048, 00:14:04.866 "data_size": 63488 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "name": "BaseBdev4", 00:14:04.866 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:04.866 "is_configured": true, 00:14:04.866 "data_offset": 2048, 00:14:04.866 "data_size": 63488 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 }' 00:14:04.866 14:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.125 [2024-12-09 14:46:43.069810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.125 [2024-12-09 14:46:43.116178] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.125 [2024-12-09 14:46:43.116346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.125 [2024-12-09 14:46:43.116412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.125 [2024-12-09 14:46:43.116439] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.125 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.126 "name": "raid_bdev1", 00:14:05.126 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:05.126 "strip_size_kb": 0, 00:14:05.126 "state": "online", 00:14:05.126 "raid_level": "raid1", 00:14:05.126 "superblock": true, 00:14:05.126 "num_base_bdevs": 4, 00:14:05.126 "num_base_bdevs_discovered": 2, 00:14:05.126 "num_base_bdevs_operational": 2, 00:14:05.126 "base_bdevs_list": [ 00:14:05.126 { 00:14:05.126 "name": null, 00:14:05.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.126 "is_configured": false, 00:14:05.126 "data_offset": 0, 00:14:05.126 "data_size": 63488 00:14:05.126 }, 00:14:05.126 { 00:14:05.126 "name": null, 00:14:05.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.126 "is_configured": false, 00:14:05.126 "data_offset": 2048, 00:14:05.126 "data_size": 63488 00:14:05.126 }, 00:14:05.126 { 00:14:05.126 "name": "BaseBdev3", 00:14:05.126 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:05.126 "is_configured": true, 00:14:05.126 "data_offset": 2048, 00:14:05.126 "data_size": 63488 00:14:05.126 }, 00:14:05.126 { 00:14:05.126 "name": "BaseBdev4", 00:14:05.126 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:05.126 "is_configured": true, 00:14:05.126 "data_offset": 2048, 00:14:05.126 "data_size": 63488 00:14:05.126 } 00:14:05.126 ] 00:14:05.126 }' 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:05.126 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.694 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.694 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.694 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.694 [2024-12-09 14:46:43.635217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.694 [2024-12-09 14:46:43.635290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.694 [2024-12-09 14:46:43.635326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:05.694 [2024-12-09 14:46:43.635337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.694 [2024-12-09 14:46:43.635912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.694 [2024-12-09 14:46:43.635940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.694 [2024-12-09 14:46:43.636050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:05.694 [2024-12-09 14:46:43.636065] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:05.694 [2024-12-09 14:46:43.636081] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:05.694 [2024-12-09 14:46:43.636122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.694 [2024-12-09 14:46:43.653879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:05.694 spare 00:14:05.694 14:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.694 14:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:05.694 [2024-12-09 14:46:43.656053] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.674 "name": "raid_bdev1", 00:14:06.674 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:06.674 "strip_size_kb": 0, 00:14:06.674 "state": "online", 00:14:06.674 
"raid_level": "raid1", 00:14:06.674 "superblock": true, 00:14:06.674 "num_base_bdevs": 4, 00:14:06.674 "num_base_bdevs_discovered": 3, 00:14:06.674 "num_base_bdevs_operational": 3, 00:14:06.674 "process": { 00:14:06.674 "type": "rebuild", 00:14:06.674 "target": "spare", 00:14:06.674 "progress": { 00:14:06.674 "blocks": 20480, 00:14:06.674 "percent": 32 00:14:06.674 } 00:14:06.674 }, 00:14:06.674 "base_bdevs_list": [ 00:14:06.674 { 00:14:06.674 "name": "spare", 00:14:06.674 "uuid": "1e959ffc-82ab-5625-b8e1-f50877fc79f1", 00:14:06.674 "is_configured": true, 00:14:06.674 "data_offset": 2048, 00:14:06.674 "data_size": 63488 00:14:06.674 }, 00:14:06.674 { 00:14:06.674 "name": null, 00:14:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.674 "is_configured": false, 00:14:06.674 "data_offset": 2048, 00:14:06.674 "data_size": 63488 00:14:06.674 }, 00:14:06.674 { 00:14:06.674 "name": "BaseBdev3", 00:14:06.674 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:06.674 "is_configured": true, 00:14:06.674 "data_offset": 2048, 00:14:06.674 "data_size": 63488 00:14:06.674 }, 00:14:06.674 { 00:14:06.674 "name": "BaseBdev4", 00:14:06.674 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:06.674 "is_configured": true, 00:14:06.674 "data_offset": 2048, 00:14:06.674 "data_size": 63488 00:14:06.674 } 00:14:06.674 ] 00:14:06.674 }' 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.674 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.939 [2024-12-09 14:46:44.807427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.939 [2024-12-09 14:46:44.861953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.939 [2024-12-09 14:46:44.862027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.939 [2024-12-09 14:46:44.862046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.939 [2024-12-09 14:46:44.862056] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.939 
14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.939 "name": "raid_bdev1", 00:14:06.939 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:06.939 "strip_size_kb": 0, 00:14:06.939 "state": "online", 00:14:06.939 "raid_level": "raid1", 00:14:06.939 "superblock": true, 00:14:06.939 "num_base_bdevs": 4, 00:14:06.939 "num_base_bdevs_discovered": 2, 00:14:06.939 "num_base_bdevs_operational": 2, 00:14:06.939 "base_bdevs_list": [ 00:14:06.939 { 00:14:06.939 "name": null, 00:14:06.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.939 "is_configured": false, 00:14:06.939 "data_offset": 0, 00:14:06.939 "data_size": 63488 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "name": null, 00:14:06.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.939 "is_configured": false, 00:14:06.939 "data_offset": 2048, 00:14:06.939 "data_size": 63488 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "name": "BaseBdev3", 00:14:06.939 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:06.939 "is_configured": true, 00:14:06.939 "data_offset": 2048, 00:14:06.939 "data_size": 63488 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "name": "BaseBdev4", 00:14:06.939 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:06.939 "is_configured": true, 00:14:06.939 "data_offset": 2048, 00:14:06.939 "data_size": 63488 00:14:06.939 } 00:14:06.939 ] 00:14:06.939 }' 00:14:06.939 14:46:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.939 14:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.505 "name": "raid_bdev1", 00:14:07.505 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:07.505 "strip_size_kb": 0, 00:14:07.505 "state": "online", 00:14:07.505 "raid_level": "raid1", 00:14:07.505 "superblock": true, 00:14:07.505 "num_base_bdevs": 4, 00:14:07.505 "num_base_bdevs_discovered": 2, 00:14:07.505 "num_base_bdevs_operational": 2, 00:14:07.505 "base_bdevs_list": [ 00:14:07.505 { 00:14:07.505 "name": null, 00:14:07.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.505 "is_configured": false, 00:14:07.505 "data_offset": 0, 00:14:07.505 "data_size": 63488 00:14:07.505 }, 00:14:07.505 
{ 00:14:07.505 "name": null, 00:14:07.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.505 "is_configured": false, 00:14:07.505 "data_offset": 2048, 00:14:07.505 "data_size": 63488 00:14:07.505 }, 00:14:07.505 { 00:14:07.505 "name": "BaseBdev3", 00:14:07.505 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:07.505 "is_configured": true, 00:14:07.505 "data_offset": 2048, 00:14:07.505 "data_size": 63488 00:14:07.505 }, 00:14:07.505 { 00:14:07.505 "name": "BaseBdev4", 00:14:07.505 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:07.505 "is_configured": true, 00:14:07.505 "data_offset": 2048, 00:14:07.505 "data_size": 63488 00:14:07.505 } 00:14:07.505 ] 00:14:07.505 }' 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 [2024-12-09 14:46:45.484046] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.505 [2024-12-09 14:46:45.484115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.505 [2024-12-09 14:46:45.484138] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:07.505 [2024-12-09 14:46:45.484151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.505 [2024-12-09 14:46:45.484688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.505 [2024-12-09 14:46:45.484799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.505 [2024-12-09 14:46:45.484903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:07.505 [2024-12-09 14:46:45.484924] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:07.505 [2024-12-09 14:46:45.484933] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.505 [2024-12-09 14:46:45.484962] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:07.505 BaseBdev1 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.505 14:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.443 14:46:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.443 "name": "raid_bdev1", 00:14:08.443 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:08.443 "strip_size_kb": 0, 00:14:08.443 "state": "online", 00:14:08.443 "raid_level": "raid1", 00:14:08.443 "superblock": true, 00:14:08.443 "num_base_bdevs": 4, 00:14:08.443 "num_base_bdevs_discovered": 2, 00:14:08.443 "num_base_bdevs_operational": 2, 00:14:08.443 "base_bdevs_list": [ 00:14:08.443 { 00:14:08.443 "name": null, 00:14:08.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.443 "is_configured": false, 00:14:08.443 "data_offset": 0, 00:14:08.443 "data_size": 63488 00:14:08.443 }, 00:14:08.443 { 00:14:08.443 "name": null, 00:14:08.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.443 
"is_configured": false, 00:14:08.443 "data_offset": 2048, 00:14:08.443 "data_size": 63488 00:14:08.443 }, 00:14:08.443 { 00:14:08.443 "name": "BaseBdev3", 00:14:08.443 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:08.443 "is_configured": true, 00:14:08.443 "data_offset": 2048, 00:14:08.443 "data_size": 63488 00:14:08.443 }, 00:14:08.443 { 00:14:08.443 "name": "BaseBdev4", 00:14:08.443 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:08.443 "is_configured": true, 00:14:08.443 "data_offset": 2048, 00:14:08.443 "data_size": 63488 00:14:08.443 } 00:14:08.443 ] 00:14:08.443 }' 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.443 14:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.010 14:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.010 14:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:09.010 "name": "raid_bdev1", 00:14:09.010 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:09.010 "strip_size_kb": 0, 00:14:09.010 "state": "online", 00:14:09.010 "raid_level": "raid1", 00:14:09.010 "superblock": true, 00:14:09.010 "num_base_bdevs": 4, 00:14:09.010 "num_base_bdevs_discovered": 2, 00:14:09.010 "num_base_bdevs_operational": 2, 00:14:09.010 "base_bdevs_list": [ 00:14:09.010 { 00:14:09.010 "name": null, 00:14:09.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.010 "is_configured": false, 00:14:09.010 "data_offset": 0, 00:14:09.010 "data_size": 63488 00:14:09.010 }, 00:14:09.011 { 00:14:09.011 "name": null, 00:14:09.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.011 "is_configured": false, 00:14:09.011 "data_offset": 2048, 00:14:09.011 "data_size": 63488 00:14:09.011 }, 00:14:09.011 { 00:14:09.011 "name": "BaseBdev3", 00:14:09.011 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:09.011 "is_configured": true, 00:14:09.011 "data_offset": 2048, 00:14:09.011 "data_size": 63488 00:14:09.011 }, 00:14:09.011 { 00:14:09.011 "name": "BaseBdev4", 00:14:09.011 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:09.011 "is_configured": true, 00:14:09.011 "data_offset": 2048, 00:14:09.011 "data_size": 63488 00:14:09.011 } 00:14:09.011 ] 00:14:09.011 }' 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.011 [2024-12-09 14:46:47.121432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.011 [2024-12-09 14:46:47.121727] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:09.011 [2024-12-09 14:46:47.121800] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:09.011 request: 00:14:09.011 { 00:14:09.011 "base_bdev": "BaseBdev1", 00:14:09.011 "raid_bdev": "raid_bdev1", 00:14:09.011 "method": "bdev_raid_add_base_bdev", 00:14:09.011 "req_id": 1 00:14:09.011 } 00:14:09.011 Got JSON-RPC error response 00:14:09.011 response: 00:14:09.011 { 00:14:09.011 "code": -22, 00:14:09.011 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:09.011 } 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:09.011 14:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.389 "name": "raid_bdev1", 00:14:10.389 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:10.389 "strip_size_kb": 0, 00:14:10.389 "state": "online", 00:14:10.389 "raid_level": "raid1", 00:14:10.389 "superblock": true, 00:14:10.389 "num_base_bdevs": 4, 00:14:10.389 "num_base_bdevs_discovered": 2, 00:14:10.389 "num_base_bdevs_operational": 2, 00:14:10.389 "base_bdevs_list": [ 00:14:10.389 { 00:14:10.389 "name": null, 00:14:10.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.389 "is_configured": false, 00:14:10.389 "data_offset": 0, 00:14:10.389 "data_size": 63488 00:14:10.389 }, 00:14:10.389 { 00:14:10.389 "name": null, 00:14:10.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.389 "is_configured": false, 00:14:10.389 "data_offset": 2048, 00:14:10.389 "data_size": 63488 00:14:10.389 }, 00:14:10.389 { 00:14:10.389 "name": "BaseBdev3", 00:14:10.389 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:10.389 "is_configured": true, 00:14:10.389 "data_offset": 2048, 00:14:10.389 "data_size": 63488 00:14:10.389 }, 00:14:10.389 { 00:14:10.389 "name": "BaseBdev4", 00:14:10.389 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:10.389 "is_configured": true, 00:14:10.389 "data_offset": 2048, 00:14:10.389 "data_size": 63488 00:14:10.389 } 00:14:10.389 ] 00:14:10.389 }' 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.389 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.647 14:46:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.647 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.647 "name": "raid_bdev1", 00:14:10.647 "uuid": "363c8b0a-a75a-4086-8528-fea2db69a8b2", 00:14:10.647 "strip_size_kb": 0, 00:14:10.647 "state": "online", 00:14:10.647 "raid_level": "raid1", 00:14:10.647 "superblock": true, 00:14:10.647 "num_base_bdevs": 4, 00:14:10.647 "num_base_bdevs_discovered": 2, 00:14:10.647 "num_base_bdevs_operational": 2, 00:14:10.647 "base_bdevs_list": [ 00:14:10.647 { 00:14:10.647 "name": null, 00:14:10.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.647 "is_configured": false, 00:14:10.647 "data_offset": 0, 00:14:10.647 "data_size": 63488 00:14:10.647 }, 00:14:10.647 { 00:14:10.647 "name": null, 00:14:10.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.647 "is_configured": false, 00:14:10.647 "data_offset": 2048, 00:14:10.647 "data_size": 63488 00:14:10.647 }, 00:14:10.647 { 00:14:10.647 "name": "BaseBdev3", 00:14:10.647 "uuid": "bd28538d-522a-5c72-b137-26523eac7f7e", 00:14:10.647 "is_configured": true, 00:14:10.647 "data_offset": 2048, 00:14:10.648 "data_size": 63488 00:14:10.648 }, 
00:14:10.648 { 00:14:10.648 "name": "BaseBdev4", 00:14:10.648 "uuid": "ba111a2d-af5d-5883-9cf9-b50789a4f0b9", 00:14:10.648 "is_configured": true, 00:14:10.648 "data_offset": 2048, 00:14:10.648 "data_size": 63488 00:14:10.648 } 00:14:10.648 ] 00:14:10.648 }' 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79304 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79304 ']' 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 79304 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79304 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.648 killing process with pid 79304 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79304' 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 79304 00:14:10.648 Received shutdown signal, test time was about 60.000000 seconds 00:14:10.648 00:14:10.648 Latency(us) 00:14:10.648 
[2024-12-09T14:46:48.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.648 [2024-12-09T14:46:48.770Z] =================================================================================================================== 00:14:10.648 [2024-12-09T14:46:48.770Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:10.648 14:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 79304 00:14:10.648 [2024-12-09 14:46:48.765205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.648 [2024-12-09 14:46:48.765366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.648 [2024-12-09 14:46:48.765479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.648 [2024-12-09 14:46:48.765530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:11.214 [2024-12-09 14:46:49.269159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.590 00:14:12.590 real 0m25.939s 00:14:12.590 user 0m31.699s 00:14:12.590 sys 0m3.929s 00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.590 ************************************ 00:14:12.590 END TEST raid_rebuild_test_sb 00:14:12.590 ************************************ 00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.590 14:46:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:12.590 14:46:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:12.590 14:46:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.590 14:46:50 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x
00:14:12.590 ************************************
00:14:12.590 START TEST raid_rebuild_test_io
00:14:12.590 ************************************
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80064
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80064
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 80064 ']'
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:12.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:12.590 14:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:12.590 I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:12.590 Zero copy mechanism will not be used.
[2024-12-09 14:46:50.562529] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
[2024-12-09 14:46:50.562667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80064 ]
00:14:12.850 [2024-12-09 14:46:50.739329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:12.850 [2024-12-09 14:46:50.864758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:13.110 [2024-12-09 14:46:51.070252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:13.110 [2024-12-09 14:46:51.070298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.370 BaseBdev1_malloc
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
[2024-12-09 14:46:51.461249] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
[2024-12-09 14:46:51.461321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-09 14:46:51.461346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-12-09 14:46:51.461357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-09 14:46:51.463541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-09 14:46:51.463587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.370 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.629 BaseBdev2_malloc
00:14:13.629 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.629 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:13.629 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.629 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
[2024-12-09 14:46:51.513971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
[2024-12-09 14:46:51.514038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-09 14:46:51.514064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-12-09 14:46:51.514076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-09 14:46:51.516330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-09 14:46:51.516374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
00:14:13.629 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.629 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.630 BaseBdev3_malloc
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
[2024-12-09 14:46:51.588481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
[2024-12-09 14:46:51.588620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-09 14:46:51.588649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
[2024-12-09 14:46:51.588663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-09 14:46:51.591033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-09 14:46:51.591072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
BaseBdev3
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.630 BaseBdev4_malloc
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
[2024-12-09 14:46:51.646306] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
[2024-12-09 14:46:51.646374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-09 14:46:51.646397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
[2024-12-09 14:46:51.646409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-09 14:46:51.648692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-09 14:46:51.648730] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
BaseBdev4
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.630 spare_malloc
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.630 spare_delay
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
[2024-12-09 14:46:51.715329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-12-09 14:46:51.715464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-09 14:46:51.715494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
[2024-12-09 14:46:51.715510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-09 14:46:51.717964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-09 14:46:51.718007] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
spare
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
[2024-12-09 14:46:51.727335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-12-09 14:46:51.729288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-09 14:46:51.729351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-12-09 14:46:51.729404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
[2024-12-09 14:46:51.729487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-12-09 14:46:51.729499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
[2024-12-09 14:46:51.729783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
[2024-12-09 14:46:51.729958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-12-09 14:46:51.730025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
[2024-12-09 14:46:51.730213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.630 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.890 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.890 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:13.890 "name": "raid_bdev1",
00:14:13.890 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0",
00:14:13.890 "strip_size_kb": 0,
00:14:13.890 "state": "online",
00:14:13.890 "raid_level": "raid1",
00:14:13.890 "superblock": false,
00:14:13.890 "num_base_bdevs": 4,
00:14:13.890 "num_base_bdevs_discovered": 4,
00:14:13.890 "num_base_bdevs_operational": 4,
00:14:13.890 "base_bdevs_list": [
00:14:13.890 {
00:14:13.890 "name": "BaseBdev1",
00:14:13.890 "uuid": "d2b82550-5558-55e4-9015-40201c38d18a",
00:14:13.890 "is_configured": true,
00:14:13.890 "data_offset": 0,
00:14:13.890 "data_size": 65536
00:14:13.890 },
00:14:13.890 {
00:14:13.890 "name": "BaseBdev2",
00:14:13.890 "uuid": "7d8f771f-9fe5-5ac4-ae27-cf677f40fe53",
00:14:13.890 "is_configured": true,
00:14:13.890 "data_offset": 0,
00:14:13.890 "data_size": 65536
00:14:13.890 },
00:14:13.890 {
00:14:13.890 "name": "BaseBdev3",
00:14:13.890 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55",
00:14:13.890 "is_configured": true,
00:14:13.890 "data_offset": 0,
00:14:13.890 "data_size": 65536
00:14:13.890 },
00:14:13.890 {
00:14:13.890 "name": "BaseBdev4",
00:14:13.890 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184",
00:14:13.890 "is_configured": true,
00:14:13.890 "data_offset": 0,
00:14:13.890 "data_size": 65536
00:14:13.890 }
00:14:13.890 ]
00:14:13.890 }'
00:14:13.891 14:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:13.891 14:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.152 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:14.152 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:14.152 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.152 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.152 [2024-12-09 14:46:52.230977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:14.152 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.152 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.415 [2024-12-09 14:46:52.326410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:14.415 "name": "raid_bdev1",
00:14:14.415 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0",
00:14:14.415 "strip_size_kb": 0,
00:14:14.415 "state": "online",
00:14:14.415 "raid_level": "raid1",
00:14:14.415 "superblock": false,
00:14:14.415 "num_base_bdevs": 4,
00:14:14.415 "num_base_bdevs_discovered": 3,
00:14:14.415 "num_base_bdevs_operational": 3,
00:14:14.415 "base_bdevs_list": [
00:14:14.415 {
00:14:14.415 "name": null,
00:14:14.415 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:14.415 "is_configured": false,
00:14:14.415 "data_offset": 0,
00:14:14.415 "data_size": 65536
00:14:14.415 },
00:14:14.415 {
00:14:14.415 "name": "BaseBdev2",
00:14:14.415 "uuid": "7d8f771f-9fe5-5ac4-ae27-cf677f40fe53",
00:14:14.415 "is_configured": true,
00:14:14.415 "data_offset": 0,
00:14:14.415 "data_size": 65536
00:14:14.415 },
00:14:14.415 {
00:14:14.415 "name": "BaseBdev3",
00:14:14.415 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55",
00:14:14.415 "is_configured": true,
00:14:14.415 "data_offset": 0,
00:14:14.415 "data_size": 65536
00:14:14.415 },
00:14:14.415 {
00:14:14.415 "name": "BaseBdev4",
00:14:14.415 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184",
00:14:14.415 "is_configured": true,
00:14:14.415 "data_offset": 0,
00:14:14.415 "data_size": 65536
00:14:14.415 }
00:14:14.415 ]
00:14:14.415 }'
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:14.415 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.415 [2024-12-09 14:46:52.446991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:14:14.415 I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:14.415 Zero copy mechanism will not be used.
00:14:14.415 Running I/O for 60 seconds...
00:14:14.675 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:14.675 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.675 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.675 [2024-12-09 14:46:52.774990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:14.935 14:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.935 14:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:14.935 [2024-12-09 14:46:52.842127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:14:14.935 [2024-12-09 14:46:52.844282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:15.194 [2024-12-09 14:46:52.953633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:15.194 [2024-12-09 14:46:53.075608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:15.194 [2024-12-09 14:46:53.076497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:15.454 [2024-12-09 14:46:53.446929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:15.714 142.00 IOPS, 426.00 MiB/s [2024-12-09T14:46:53.836Z]
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:15.973 14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.973 14:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.973 14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.973 14:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:15.973 14:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.973 14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:15.973 "name": "raid_bdev1",
00:14:15.973 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0",
00:14:15.973 "strip_size_kb": 0,
00:14:15.973 "state": "online",
00:14:15.973 "raid_level": "raid1",
00:14:15.973 "superblock": false,
00:14:15.973 "num_base_bdevs": 4,
00:14:15.973 "num_base_bdevs_discovered": 4,
00:14:15.973 "num_base_bdevs_operational": 4,
00:14:15.973 "process": {
00:14:15.973 "type": "rebuild",
00:14:15.973 "target": "spare",
00:14:15.973 "progress": {
00:14:15.973 "blocks": 14336,
00:14:15.973 "percent": 21
00:14:15.973 }
00:14:15.973 },
00:14:15.973 "base_bdevs_list": [
00:14:15.973 {
00:14:15.973 "name": "spare",
00:14:15.973 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea",
00:14:15.973 "is_configured": true,
00:14:15.973 "data_offset": 0,
00:14:15.973 "data_size": 65536
00:14:15.973 },
00:14:15.973 {
00:14:15.973 "name": "BaseBdev2",
00:14:15.973 "uuid": "7d8f771f-9fe5-5ac4-ae27-cf677f40fe53",
00:14:15.973 "is_configured": true,
00:14:15.974 "data_offset": 0,
00:14:15.974 "data_size": 65536
00:14:15.974 },
00:14:15.974 {
00:14:15.974 "name": "BaseBdev3",
00:14:15.974 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55",
00:14:15.974 "is_configured": true,
00:14:15.974 "data_offset": 0,
00:14:15.974 "data_size": 65536
00:14:15.974 },
00:14:15.974 {
00:14:15.974 "name": "BaseBdev4",
00:14:15.974 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184",
00:14:15.974 "is_configured": true,
00:14:15.974 "data_offset": 0,
00:14:15.974 "data_size": 65536
00:14:15.974 }
00:14:15.974 ]
00:14:15.974 }'
00:14:15.974 14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
[2024-12-09 14:46:53.927656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
14:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
14:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
14:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
[2024-12-09 14:46:53.994956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-12-09 14:46:54.039156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:14:16.234 [2024-12-09 14:46:54.150375] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:16.234 [2024-12-09 14:46:54.155876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:16.234 [2024-12-09 14:46:54.155996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:16.234 [2024-12-09 14:46:54.156021] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:16.234 [2024-12-09 14:46:54.192270] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:16.234 "name": "raid_bdev1",
00:14:16.234 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0",
00:14:16.234 "strip_size_kb": 0,
00:14:16.234 "state": "online",
00:14:16.234 "raid_level": "raid1",
00:14:16.234 "superblock": false,
00:14:16.234 "num_base_bdevs": 4,
00:14:16.234 "num_base_bdevs_discovered": 3,
00:14:16.234 "num_base_bdevs_operational": 3,
00:14:16.234 "base_bdevs_list": [
00:14:16.234 {
00:14:16.234 "name": null,
00:14:16.234 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.234 "is_configured": false,
00:14:16.234 "data_offset": 0,
00:14:16.234 "data_size": 65536
00:14:16.234 },
00:14:16.234 {
00:14:16.234 "name": "BaseBdev2",
00:14:16.234 "uuid": "7d8f771f-9fe5-5ac4-ae27-cf677f40fe53",
00:14:16.234 "is_configured": true,
00:14:16.234 "data_offset": 0,
00:14:16.234 "data_size": 65536
00:14:16.234 },
00:14:16.234 {
00:14:16.234 "name": "BaseBdev3",
00:14:16.234 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55",
00:14:16.234 "is_configured": true,
00:14:16.234 "data_offset": 0,
00:14:16.234 "data_size": 65536
00:14:16.234 },
00:14:16.234 {
00:14:16.234 "name": "BaseBdev4",
00:14:16.234 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184",
00:14:16.234 "is_configured": true,
00:14:16.234 "data_offset": 0,
00:14:16.234 "data_size": 65536
00:14:16.234 }
00:14:16.234 ]
00:14:16.234 }'
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:16.234 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:16.753 130.00 IOPS, 390.00 MiB/s [2024-12-09T14:46:54.875Z]
14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:16.753 "name": "raid_bdev1",
00:14:16.753 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0",
00:14:16.753 "strip_size_kb": 0,
00:14:16.753 "state": "online",
00:14:16.753 "raid_level": "raid1",
00:14:16.753 "superblock": false,
00:14:16.753 "num_base_bdevs": 4,
00:14:16.753 "num_base_bdevs_discovered": 3,
00:14:16.753 "num_base_bdevs_operational": 3,
00:14:16.753 "base_bdevs_list": [
00:14:16.753 {
00:14:16.753 "name": null,
00:14:16.753 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.753 "is_configured": false,
00:14:16.753 "data_offset": 0,
00:14:16.753 "data_size": 65536
00:14:16.753 },
00:14:16.753 {
00:14:16.753 "name": "BaseBdev2",
00:14:16.753 "uuid": "7d8f771f-9fe5-5ac4-ae27-cf677f40fe53",
00:14:16.753 "is_configured": true,
00:14:16.753 "data_offset": 0,
00:14:16.753 "data_size": 65536
00:14:16.753 },
00:14:16.753 {
00:14:16.753 "name": "BaseBdev3",
00:14:16.753 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55",
00:14:16.753 "is_configured": true,
00:14:16.753 "data_offset": 0,
00:14:16.753 "data_size": 65536
00:14:16.753 },
00:14:16.753 {
00:14:16.753 "name": "BaseBdev4",
00:14:16.753 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184",
00:14:16.753 "is_configured": true,
00:14:16.753 "data_offset": 0,
00:14:16.753 "data_size": 65536
00:14:16.753 }
00:14:16.753 ]
00:14:16.753 }'
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:16.753 [2024-12-09 14:46:54.815019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.753 14:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:16.753 [2024-12-09 14:46:54.853208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:14:16.753 [2024-12-09 14:46:54.855383] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:17.012 [2024-12-09 14:46:54.972971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:17.012 [2024-12-09 14:46:54.974464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:17.275 [2024-12-09 14:46:55.193415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096
offset_begin: 0 offset_end: 6144 00:14:17.275 [2024-12-09 14:46:55.193864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:17.536 146.00 IOPS, 438.00 MiB/s [2024-12-09T14:46:55.658Z] [2024-12-09 14:46:55.564017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:17.795 [2024-12-09 14:46:55.780953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:17.795 [2024-12-09 14:46:55.781692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.795 "name": "raid_bdev1", 00:14:17.795 
"uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:17.795 "strip_size_kb": 0, 00:14:17.795 "state": "online", 00:14:17.795 "raid_level": "raid1", 00:14:17.795 "superblock": false, 00:14:17.795 "num_base_bdevs": 4, 00:14:17.795 "num_base_bdevs_discovered": 4, 00:14:17.795 "num_base_bdevs_operational": 4, 00:14:17.795 "process": { 00:14:17.795 "type": "rebuild", 00:14:17.795 "target": "spare", 00:14:17.795 "progress": { 00:14:17.795 "blocks": 14336, 00:14:17.795 "percent": 21 00:14:17.795 } 00:14:17.795 }, 00:14:17.795 "base_bdevs_list": [ 00:14:17.795 { 00:14:17.795 "name": "spare", 00:14:17.795 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:17.795 "is_configured": true, 00:14:17.795 "data_offset": 0, 00:14:17.795 "data_size": 65536 00:14:17.795 }, 00:14:17.795 { 00:14:17.795 "name": "BaseBdev2", 00:14:17.795 "uuid": "7d8f771f-9fe5-5ac4-ae27-cf677f40fe53", 00:14:17.795 "is_configured": true, 00:14:17.795 "data_offset": 0, 00:14:17.795 "data_size": 65536 00:14:17.795 }, 00:14:17.795 { 00:14:17.795 "name": "BaseBdev3", 00:14:17.795 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:17.795 "is_configured": true, 00:14:17.795 "data_offset": 0, 00:14:17.795 "data_size": 65536 00:14:17.795 }, 00:14:17.795 { 00:14:17.795 "name": "BaseBdev4", 00:14:17.795 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:17.795 "is_configured": true, 00:14:17.795 "data_offset": 0, 00:14:17.795 "data_size": 65536 00:14:17.795 } 00:14:17.795 ] 00:14:17.795 }' 00:14:17.795 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.055 [2024-12-09 14:46:55.993653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:18.055 
[2024-12-09 14:46:55.994042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.055 14:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.055 [2024-12-09 14:46:56.005287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.315 [2024-12-09 14:46:56.314492] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:18.315 [2024-12-09 14:46:56.314545] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.315 14:46:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.315 "name": "raid_bdev1", 00:14:18.315 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:18.315 "strip_size_kb": 0, 00:14:18.315 "state": "online", 00:14:18.315 "raid_level": "raid1", 00:14:18.315 "superblock": false, 00:14:18.315 "num_base_bdevs": 4, 00:14:18.315 "num_base_bdevs_discovered": 3, 00:14:18.315 "num_base_bdevs_operational": 3, 00:14:18.315 "process": { 00:14:18.315 "type": "rebuild", 00:14:18.315 "target": "spare", 00:14:18.315 "progress": { 00:14:18.315 "blocks": 18432, 00:14:18.315 "percent": 28 00:14:18.315 } 00:14:18.315 }, 00:14:18.315 "base_bdevs_list": [ 00:14:18.315 { 00:14:18.315 "name": "spare", 00:14:18.315 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:18.315 "is_configured": true, 00:14:18.315 "data_offset": 0, 00:14:18.315 "data_size": 65536 00:14:18.315 }, 00:14:18.315 { 00:14:18.315 "name": null, 00:14:18.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.315 "is_configured": false, 00:14:18.315 "data_offset": 0, 00:14:18.315 "data_size": 65536 00:14:18.315 }, 00:14:18.315 { 
00:14:18.315 "name": "BaseBdev3", 00:14:18.315 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:18.315 "is_configured": true, 00:14:18.315 "data_offset": 0, 00:14:18.315 "data_size": 65536 00:14:18.315 }, 00:14:18.315 { 00:14:18.315 "name": "BaseBdev4", 00:14:18.315 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:18.315 "is_configured": true, 00:14:18.315 "data_offset": 0, 00:14:18.315 "data_size": 65536 00:14:18.315 } 00:14:18.315 ] 00:14:18.315 }' 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.315 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.315 [2024-12-09 14:46:56.434369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:18.577 126.25 IOPS, 378.75 MiB/s [2024-12-09T14:46:56.699Z] 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.577 "name": "raid_bdev1", 00:14:18.577 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:18.577 "strip_size_kb": 0, 00:14:18.577 "state": "online", 00:14:18.577 "raid_level": "raid1", 00:14:18.577 "superblock": false, 00:14:18.577 "num_base_bdevs": 4, 00:14:18.577 "num_base_bdevs_discovered": 3, 00:14:18.577 "num_base_bdevs_operational": 3, 00:14:18.577 "process": { 00:14:18.577 "type": "rebuild", 00:14:18.577 "target": "spare", 00:14:18.577 "progress": { 00:14:18.577 "blocks": 20480, 00:14:18.577 "percent": 31 00:14:18.577 } 00:14:18.577 }, 00:14:18.577 "base_bdevs_list": [ 00:14:18.577 { 00:14:18.577 "name": "spare", 00:14:18.577 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:18.577 "is_configured": true, 00:14:18.577 "data_offset": 0, 00:14:18.577 "data_size": 65536 00:14:18.577 }, 00:14:18.577 { 00:14:18.577 "name": null, 00:14:18.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.577 "is_configured": false, 00:14:18.577 "data_offset": 0, 00:14:18.577 "data_size": 65536 00:14:18.577 }, 00:14:18.577 { 00:14:18.577 "name": "BaseBdev3", 00:14:18.577 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:18.577 "is_configured": true, 00:14:18.577 "data_offset": 0, 00:14:18.577 "data_size": 65536 00:14:18.577 }, 00:14:18.577 { 00:14:18.577 "name": "BaseBdev4", 00:14:18.577 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:18.577 "is_configured": true, 00:14:18.577 "data_offset": 0, 
00:14:18.577 "data_size": 65536 00:14:18.577 } 00:14:18.577 ] 00:14:18.577 }' 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.577 14:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.577 [2024-12-09 14:46:56.644140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:18.838 [2024-12-09 14:46:56.867852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:19.097 [2024-12-09 14:46:56.989566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:19.356 [2024-12-09 14:46:57.343104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:19.356 [2024-12-09 14:46:57.343568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:19.615 111.20 IOPS, 333.60 MiB/s [2024-12-09T14:46:57.737Z] 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.615 14:46:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.615 "name": "raid_bdev1", 00:14:19.615 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:19.615 "strip_size_kb": 0, 00:14:19.615 "state": "online", 00:14:19.615 "raid_level": "raid1", 00:14:19.615 "superblock": false, 00:14:19.615 "num_base_bdevs": 4, 00:14:19.615 "num_base_bdevs_discovered": 3, 00:14:19.615 "num_base_bdevs_operational": 3, 00:14:19.615 "process": { 00:14:19.615 "type": "rebuild", 00:14:19.615 "target": "spare", 00:14:19.615 "progress": { 00:14:19.615 "blocks": 36864, 00:14:19.615 "percent": 56 00:14:19.615 } 00:14:19.615 }, 00:14:19.615 "base_bdevs_list": [ 00:14:19.615 { 00:14:19.615 "name": "spare", 00:14:19.615 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:19.615 "is_configured": true, 00:14:19.615 "data_offset": 0, 00:14:19.615 "data_size": 65536 00:14:19.615 }, 00:14:19.615 { 00:14:19.615 "name": null, 00:14:19.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.615 "is_configured": false, 00:14:19.615 "data_offset": 0, 00:14:19.615 "data_size": 65536 00:14:19.615 }, 00:14:19.615 { 00:14:19.615 "name": "BaseBdev3", 00:14:19.615 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:19.615 
"is_configured": true, 00:14:19.615 "data_offset": 0, 00:14:19.615 "data_size": 65536 00:14:19.615 }, 00:14:19.615 { 00:14:19.615 "name": "BaseBdev4", 00:14:19.615 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:19.615 "is_configured": true, 00:14:19.615 "data_offset": 0, 00:14:19.615 "data_size": 65536 00:14:19.615 } 00:14:19.615 ] 00:14:19.615 }' 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.615 [2024-12-09 14:46:57.672105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:19.615 [2024-12-09 14:46:57.672711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.615 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.875 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.875 14:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.875 [2024-12-09 14:46:57.796034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:20.134 [2024-12-09 14:46:58.144545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:20.653 99.83 IOPS, 299.50 MiB/s [2024-12-09T14:46:58.775Z] [2024-12-09 14:46:58.727907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.653 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.913 14:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.913 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.913 "name": "raid_bdev1", 00:14:20.913 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:20.913 "strip_size_kb": 0, 00:14:20.913 "state": "online", 00:14:20.913 "raid_level": "raid1", 00:14:20.913 "superblock": false, 00:14:20.913 "num_base_bdevs": 4, 00:14:20.913 "num_base_bdevs_discovered": 3, 00:14:20.913 "num_base_bdevs_operational": 3, 00:14:20.913 "process": { 00:14:20.913 "type": "rebuild", 00:14:20.913 "target": "spare", 00:14:20.913 "progress": { 00:14:20.913 "blocks": 53248, 00:14:20.913 "percent": 81 00:14:20.913 } 00:14:20.913 }, 00:14:20.913 "base_bdevs_list": [ 00:14:20.913 { 00:14:20.913 "name": "spare", 00:14:20.913 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:20.913 "is_configured": true, 00:14:20.913 "data_offset": 0, 00:14:20.913 "data_size": 65536 00:14:20.913 }, 00:14:20.913 { 00:14:20.913 "name": null, 00:14:20.913 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:20.913 "is_configured": false, 00:14:20.913 "data_offset": 0, 00:14:20.913 "data_size": 65536 00:14:20.913 }, 00:14:20.913 { 00:14:20.913 "name": "BaseBdev3", 00:14:20.913 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:20.913 "is_configured": true, 00:14:20.913 "data_offset": 0, 00:14:20.913 "data_size": 65536 00:14:20.913 }, 00:14:20.913 { 00:14:20.913 "name": "BaseBdev4", 00:14:20.913 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:20.913 "is_configured": true, 00:14:20.913 "data_offset": 0, 00:14:20.913 "data_size": 65536 00:14:20.913 } 00:14:20.913 ] 00:14:20.913 }' 00:14:20.913 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.913 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.913 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.913 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.913 14:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.914 [2024-12-09 14:46:58.957384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:21.482 93.00 IOPS, 279.00 MiB/s [2024-12-09T14:46:59.604Z] [2024-12-09 14:46:59.462493] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:21.482 [2024-12-09 14:46:59.550807] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:21.482 [2024-12-09 14:46:59.554093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.050 "name": "raid_bdev1", 00:14:22.050 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:22.050 "strip_size_kb": 0, 00:14:22.050 "state": "online", 00:14:22.050 "raid_level": "raid1", 00:14:22.050 "superblock": false, 00:14:22.050 "num_base_bdevs": 4, 00:14:22.050 "num_base_bdevs_discovered": 3, 00:14:22.050 "num_base_bdevs_operational": 3, 00:14:22.050 "base_bdevs_list": [ 00:14:22.050 { 00:14:22.050 "name": "spare", 00:14:22.050 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:22.050 "is_configured": true, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 }, 00:14:22.050 { 00:14:22.050 "name": null, 00:14:22.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.050 "is_configured": false, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 }, 00:14:22.050 { 00:14:22.050 "name": "BaseBdev3", 00:14:22.050 
"uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:22.050 "is_configured": true, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 }, 00:14:22.050 { 00:14:22.050 "name": "BaseBdev4", 00:14:22.050 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:22.050 "is_configured": true, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 } 00:14:22.050 ] 00:14:22.050 }' 00:14:22.050 14:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.050 14:47:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.050 "name": "raid_bdev1", 00:14:22.050 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:22.050 "strip_size_kb": 0, 00:14:22.050 "state": "online", 00:14:22.050 "raid_level": "raid1", 00:14:22.050 "superblock": false, 00:14:22.050 "num_base_bdevs": 4, 00:14:22.050 "num_base_bdevs_discovered": 3, 00:14:22.050 "num_base_bdevs_operational": 3, 00:14:22.050 "base_bdevs_list": [ 00:14:22.050 { 00:14:22.050 "name": "spare", 00:14:22.050 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:22.050 "is_configured": true, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 }, 00:14:22.050 { 00:14:22.050 "name": null, 00:14:22.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.050 "is_configured": false, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 }, 00:14:22.050 { 00:14:22.050 "name": "BaseBdev3", 00:14:22.050 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:22.050 "is_configured": true, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 }, 00:14:22.050 { 00:14:22.050 "name": "BaseBdev4", 00:14:22.050 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:22.050 "is_configured": true, 00:14:22.050 "data_offset": 0, 00:14:22.050 "data_size": 65536 00:14:22.050 } 00:14:22.050 ] 00:14:22.050 }' 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.050 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.051 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.310 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.310 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.310 "name": "raid_bdev1", 00:14:22.310 "uuid": "7d102088-5e31-44e0-ba6d-ac5e21722ad0", 00:14:22.310 "strip_size_kb": 0, 00:14:22.310 "state": "online", 00:14:22.310 "raid_level": "raid1", 00:14:22.310 "superblock": false, 00:14:22.310 "num_base_bdevs": 4, 00:14:22.310 
"num_base_bdevs_discovered": 3, 00:14:22.310 "num_base_bdevs_operational": 3, 00:14:22.310 "base_bdevs_list": [ 00:14:22.310 { 00:14:22.310 "name": "spare", 00:14:22.310 "uuid": "5bdd7b8a-6b67-5194-a56d-6d6641d584ea", 00:14:22.310 "is_configured": true, 00:14:22.310 "data_offset": 0, 00:14:22.310 "data_size": 65536 00:14:22.310 }, 00:14:22.310 { 00:14:22.310 "name": null, 00:14:22.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.310 "is_configured": false, 00:14:22.310 "data_offset": 0, 00:14:22.310 "data_size": 65536 00:14:22.310 }, 00:14:22.310 { 00:14:22.310 "name": "BaseBdev3", 00:14:22.310 "uuid": "74aac4cb-56f6-564e-99dc-ee8040f68f55", 00:14:22.310 "is_configured": true, 00:14:22.310 "data_offset": 0, 00:14:22.310 "data_size": 65536 00:14:22.310 }, 00:14:22.310 { 00:14:22.310 "name": "BaseBdev4", 00:14:22.310 "uuid": "b795d3c9-153e-58b4-80b3-be38494c8184", 00:14:22.310 "is_configured": true, 00:14:22.310 "data_offset": 0, 00:14:22.310 "data_size": 65536 00:14:22.310 } 00:14:22.310 ] 00:14:22.310 }' 00:14:22.310 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.310 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.570 84.88 IOPS, 254.62 MiB/s [2024-12-09T14:47:00.692Z] 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.570 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.570 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.570 [2024-12-09 14:47:00.587425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.570 [2024-12-09 14:47:00.587520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.570 00:14:22.570 Latency(us) 00:14:22.570 [2024-12-09T14:47:00.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:14:22.570 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:22.570 raid_bdev1 : 8.22 83.36 250.07 0.00 0.00 16516.91 313.01 113099.68 00:14:22.570 [2024-12-09T14:47:00.692Z] =================================================================================================================== 00:14:22.570 [2024-12-09T14:47:00.692Z] Total : 83.36 250.07 0.00 0.00 16516.91 313.01 113099.68 00:14:22.570 [2024-12-09 14:47:00.672525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.570 [2024-12-09 14:47:00.672612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.570 [2024-12-09 14:47:00.672714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.570 [2024-12-09 14:47:00.672724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:22.570 { 00:14:22.570 "results": [ 00:14:22.570 { 00:14:22.570 "job": "raid_bdev1", 00:14:22.570 "core_mask": "0x1", 00:14:22.570 "workload": "randrw", 00:14:22.570 "percentage": 50, 00:14:22.570 "status": "finished", 00:14:22.570 "queue_depth": 2, 00:14:22.570 "io_size": 3145728, 00:14:22.570 "runtime": 8.217706, 00:14:22.570 "iops": 83.35659611088545, 00:14:22.570 "mibps": 250.06978833265634, 00:14:22.570 "io_failed": 0, 00:14:22.570 "io_timeout": 0, 00:14:22.570 "avg_latency_us": 16516.912086188764, 00:14:22.570 "min_latency_us": 313.0131004366812, 00:14:22.570 "max_latency_us": 113099.68209606987 00:14:22.570 } 00:14:22.570 ], 00:14:22.570 "core_count": 1 00:14:22.570 } 00:14:22.570 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.570 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.570 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:22.570 14:47:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.570 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.830 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:22.830 /dev/nbd0 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.090 1+0 records in 00:14:23.090 1+0 records out 00:14:23.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577387 s, 7.1 MB/s 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.090 14:47:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:23.090 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.090 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.090 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:23.350 /dev/nbd1 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 
00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.350 1+0 records in 00:14:23.350 1+0 records out 00:14:23.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262798 s, 15.6 MB/s 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:23.350 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:23.351 
14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.351 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:23.351 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.351 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.351 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.351 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.611 14:47:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.611 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:23.871 /dev/nbd1 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:23.871 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.872 1+0 records in 00:14:23.872 1+0 records out 00:14:23.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381154 s, 10.7 MB/s 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.872 14:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:14:24.131 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:24.131 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.132 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 80064 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 80064 ']' 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 80064 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80064 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.391 killing process with pid 80064 00:14:24.391 Received shutdown signal, test time was about 10.060383 seconds 00:14:24.391 00:14:24.391 Latency(us) 00:14:24.391 [2024-12-09T14:47:02.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.391 [2024-12-09T14:47:02.513Z] =================================================================================================================== 00:14:24.391 [2024-12-09T14:47:02.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 80064' 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 80064 00:14:24.391 [2024-12-09 14:47:02.490374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.391 14:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 80064 00:14:24.976 [2024-12-09 14:47:02.919003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.355 ************************************ 00:14:26.355 END TEST raid_rebuild_test_io 00:14:26.355 ************************************ 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:26.355 00:14:26.355 real 0m13.635s 00:14:26.355 user 0m17.286s 00:14:26.355 sys 0m1.804s 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.355 14:47:04 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:26.355 14:47:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:26.355 14:47:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.355 14:47:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.355 ************************************ 00:14:26.355 START TEST raid_rebuild_test_sb_io 00:14:26.355 ************************************ 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:26.355 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80473 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80473 00:14:26.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 80473 ']' 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.356 14:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.356 [2024-12-09 14:47:04.278019] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:14:26.356 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:26.356 Zero copy mechanism will not be used. 00:14:26.356 [2024-12-09 14:47:04.278600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80473 ] 00:14:26.356 [2024-12-09 14:47:04.450754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.614 [2024-12-09 14:47:04.577947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.873 [2024-12-09 14:47:04.779850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.873 [2024-12-09 14:47:04.779917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.135 BaseBdev1_malloc 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.135 [2024-12-09 14:47:05.145652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:27.135 [2024-12-09 14:47:05.145712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.135 [2024-12-09 14:47:05.145735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.135 [2024-12-09 14:47:05.145746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.135 [2024-12-09 14:47:05.147810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.135 [2024-12-09 14:47:05.147850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.135 BaseBdev1 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.135 BaseBdev2_malloc 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.135 [2024-12-09 14:47:05.200268] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:27.135 [2024-12-09 14:47:05.200327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.135 [2024-12-09 14:47:05.200350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.135 [2024-12-09 14:47:05.200361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.135 [2024-12-09 14:47:05.202426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.135 [2024-12-09 14:47:05.202463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:27.135 BaseBdev2 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.135 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 BaseBdev3_malloc 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.396 14:47:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 [2024-12-09 14:47:05.267103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:27.396 [2024-12-09 14:47:05.267206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.396 [2024-12-09 14:47:05.267234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:27.396 [2024-12-09 14:47:05.267246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.396 [2024-12-09 14:47:05.269392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.396 [2024-12-09 14:47:05.269432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:27.396 BaseBdev3 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 BaseBdev4_malloc 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 [2024-12-09 14:47:05.321534] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:27.396 [2024-12-09 14:47:05.321603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.396 [2024-12-09 14:47:05.321625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:27.396 [2024-12-09 14:47:05.321636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.396 [2024-12-09 14:47:05.323741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.396 [2024-12-09 14:47:05.323840] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:27.396 BaseBdev4 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 spare_malloc 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 spare_delay 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 [2024-12-09 14:47:05.388933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.396 [2024-12-09 14:47:05.388992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.396 [2024-12-09 14:47:05.389012] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:27.396 [2024-12-09 14:47:05.389023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.396 [2024-12-09 14:47:05.391117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.396 [2024-12-09 14:47:05.391157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.396 spare 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.396 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.396 [2024-12-09 14:47:05.400989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.396 [2024-12-09 14:47:05.402865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.396 [2024-12-09 14:47:05.402927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.396 [2024-12-09 14:47:05.402977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:27.396 [2024-12-09 14:47:05.403158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:27.396 [2024-12-09 14:47:05.403172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.396 [2024-12-09 14:47:05.403494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:27.397 [2024-12-09 14:47:05.403690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:27.397 [2024-12-09 14:47:05.403702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:27.397 [2024-12-09 14:47:05.403868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.397 "name": "raid_bdev1", 00:14:27.397 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:27.397 "strip_size_kb": 0, 00:14:27.397 "state": "online", 00:14:27.397 "raid_level": "raid1", 00:14:27.397 "superblock": true, 00:14:27.397 "num_base_bdevs": 4, 00:14:27.397 "num_base_bdevs_discovered": 4, 00:14:27.397 "num_base_bdevs_operational": 4, 00:14:27.397 "base_bdevs_list": [ 00:14:27.397 { 00:14:27.397 "name": "BaseBdev1", 00:14:27.397 "uuid": "87d7b64c-6fa5-5858-9d20-6943ddf7d2bc", 00:14:27.397 "is_configured": true, 00:14:27.397 "data_offset": 2048, 00:14:27.397 "data_size": 63488 00:14:27.397 }, 00:14:27.397 { 00:14:27.397 "name": "BaseBdev2", 00:14:27.397 "uuid": "238c3c17-69fa-52c4-ac7b-3c4f9e72dc8f", 00:14:27.397 "is_configured": true, 00:14:27.397 "data_offset": 2048, 00:14:27.397 "data_size": 63488 00:14:27.397 }, 00:14:27.397 { 00:14:27.397 "name": "BaseBdev3", 00:14:27.397 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:27.397 "is_configured": true, 00:14:27.397 "data_offset": 2048, 00:14:27.397 "data_size": 63488 00:14:27.397 }, 00:14:27.397 { 00:14:27.397 "name": "BaseBdev4", 00:14:27.397 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:27.397 "is_configured": true, 00:14:27.397 "data_offset": 2048, 00:14:27.397 "data_size": 63488 00:14:27.397 } 00:14:27.397 ] 00:14:27.397 }' 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:27.397 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:27.967 [2024-12-09 14:47:05.860520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:27.967 14:47:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.967 [2024-12-09 14:47:05.955978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:27.967 14:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.967 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.967 "name": "raid_bdev1", 00:14:27.967 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:27.967 "strip_size_kb": 0, 00:14:27.967 "state": "online", 00:14:27.967 "raid_level": "raid1", 00:14:27.967 "superblock": true, 00:14:27.967 "num_base_bdevs": 4, 00:14:27.967 "num_base_bdevs_discovered": 3, 00:14:27.967 "num_base_bdevs_operational": 3, 00:14:27.967 "base_bdevs_list": [ 00:14:27.967 { 00:14:27.967 "name": null, 00:14:27.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.967 "is_configured": false, 00:14:27.967 "data_offset": 0, 00:14:27.967 "data_size": 63488 00:14:27.967 }, 00:14:27.967 { 00:14:27.967 "name": "BaseBdev2", 00:14:27.967 "uuid": "238c3c17-69fa-52c4-ac7b-3c4f9e72dc8f", 00:14:27.967 "is_configured": true, 00:14:27.967 "data_offset": 2048, 00:14:27.967 "data_size": 63488 00:14:27.967 }, 00:14:27.967 { 00:14:27.967 "name": "BaseBdev3", 00:14:27.967 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:27.967 "is_configured": true, 00:14:27.967 "data_offset": 2048, 00:14:27.967 "data_size": 63488 00:14:27.967 }, 00:14:27.967 { 00:14:27.967 "name": "BaseBdev4", 00:14:27.967 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:27.967 "is_configured": true, 00:14:27.967 "data_offset": 2048, 00:14:27.967 "data_size": 63488 00:14:27.967 } 00:14:27.967 ] 00:14:27.967 }' 00:14:27.967 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.967 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.967 [2024-12-09 14:47:06.064177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:27.967 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:27.967 Zero copy mechanism will not be used. 
00:14:27.967 Running I/O for 60 seconds... 00:14:28.535 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.535 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.535 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.535 [2024-12-09 14:47:06.450294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.535 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.535 14:47:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:28.535 [2024-12-09 14:47:06.530207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:28.535 [2024-12-09 14:47:06.532368] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.535 [2024-12-09 14:47:06.654224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:28.795 [2024-12-09 14:47:06.655848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:28.795 [2024-12-09 14:47:06.883656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:28.795 [2024-12-09 14:47:06.884082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:29.054 174.00 IOPS, 522.00 MiB/s [2024-12-09T14:47:07.176Z] [2024-12-09 14:47:07.145791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:29.054 [2024-12-09 14:47:07.147315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:29.313 
[2024-12-09 14:47:07.363874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:29.313 [2024-12-09 14:47:07.364838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.572 "name": "raid_bdev1", 00:14:29.572 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:29.572 "strip_size_kb": 0, 00:14:29.572 "state": "online", 00:14:29.572 "raid_level": "raid1", 00:14:29.572 "superblock": true, 00:14:29.572 "num_base_bdevs": 4, 00:14:29.572 "num_base_bdevs_discovered": 4, 00:14:29.572 "num_base_bdevs_operational": 4, 00:14:29.572 "process": { 00:14:29.572 "type": "rebuild", 00:14:29.572 "target": 
"spare", 00:14:29.572 "progress": { 00:14:29.572 "blocks": 10240, 00:14:29.572 "percent": 16 00:14:29.572 } 00:14:29.572 }, 00:14:29.572 "base_bdevs_list": [ 00:14:29.572 { 00:14:29.572 "name": "spare", 00:14:29.572 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 }, 00:14:29.572 { 00:14:29.572 "name": "BaseBdev2", 00:14:29.572 "uuid": "238c3c17-69fa-52c4-ac7b-3c4f9e72dc8f", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 }, 00:14:29.572 { 00:14:29.572 "name": "BaseBdev3", 00:14:29.572 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 }, 00:14:29.572 { 00:14:29.572 "name": "BaseBdev4", 00:14:29.572 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 } 00:14:29.572 ] 00:14:29.572 }' 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.572 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.572 [2024-12-09 14:47:07.668481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:14:29.831 [2024-12-09 14:47:07.715464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:29.831 [2024-12-09 14:47:07.722561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.831 [2024-12-09 14:47:07.725959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.831 [2024-12-09 14:47:07.726038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.831 [2024-12-09 14:47:07.726065] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.831 [2024-12-09 14:47:07.761211] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:29.831 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.831 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.831 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.831 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.831 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.832 14:47:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.832 "name": "raid_bdev1", 00:14:29.832 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:29.832 "strip_size_kb": 0, 00:14:29.832 "state": "online", 00:14:29.832 "raid_level": "raid1", 00:14:29.832 "superblock": true, 00:14:29.832 "num_base_bdevs": 4, 00:14:29.832 "num_base_bdevs_discovered": 3, 00:14:29.832 "num_base_bdevs_operational": 3, 00:14:29.832 "base_bdevs_list": [ 00:14:29.832 { 00:14:29.832 "name": null, 00:14:29.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.832 "is_configured": false, 00:14:29.832 "data_offset": 0, 00:14:29.832 "data_size": 63488 00:14:29.832 }, 00:14:29.832 { 00:14:29.832 "name": "BaseBdev2", 00:14:29.832 "uuid": "238c3c17-69fa-52c4-ac7b-3c4f9e72dc8f", 00:14:29.832 "is_configured": true, 00:14:29.832 "data_offset": 2048, 00:14:29.832 "data_size": 63488 00:14:29.832 }, 00:14:29.832 { 00:14:29.832 "name": "BaseBdev3", 00:14:29.832 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:29.832 "is_configured": true, 00:14:29.832 "data_offset": 2048, 00:14:29.832 "data_size": 63488 00:14:29.832 }, 00:14:29.832 { 00:14:29.832 "name": "BaseBdev4", 00:14:29.832 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:29.832 "is_configured": true, 00:14:29.832 "data_offset": 2048, 00:14:29.832 
"data_size": 63488 00:14:29.832 } 00:14:29.832 ] 00:14:29.832 }' 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.832 14:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.091 136.50 IOPS, 409.50 MiB/s [2024-12-09T14:47:08.214Z] 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.092 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.351 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.351 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.351 "name": "raid_bdev1", 00:14:30.351 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:30.351 "strip_size_kb": 0, 00:14:30.351 "state": "online", 00:14:30.351 "raid_level": "raid1", 00:14:30.351 "superblock": true, 00:14:30.351 "num_base_bdevs": 4, 00:14:30.351 "num_base_bdevs_discovered": 3, 00:14:30.352 "num_base_bdevs_operational": 3, 00:14:30.352 "base_bdevs_list": [ 00:14:30.352 { 00:14:30.352 "name": null, 
00:14:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.352 "is_configured": false, 00:14:30.352 "data_offset": 0, 00:14:30.352 "data_size": 63488 00:14:30.352 }, 00:14:30.352 { 00:14:30.352 "name": "BaseBdev2", 00:14:30.352 "uuid": "238c3c17-69fa-52c4-ac7b-3c4f9e72dc8f", 00:14:30.352 "is_configured": true, 00:14:30.352 "data_offset": 2048, 00:14:30.352 "data_size": 63488 00:14:30.352 }, 00:14:30.352 { 00:14:30.352 "name": "BaseBdev3", 00:14:30.352 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:30.352 "is_configured": true, 00:14:30.352 "data_offset": 2048, 00:14:30.352 "data_size": 63488 00:14:30.352 }, 00:14:30.352 { 00:14:30.352 "name": "BaseBdev4", 00:14:30.352 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:30.352 "is_configured": true, 00:14:30.352 "data_offset": 2048, 00:14:30.352 "data_size": 63488 00:14:30.352 } 00:14:30.352 ] 00:14:30.352 }' 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.352 [2024-12-09 14:47:08.334269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.352 14:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 
-- # sleep 1 00:14:30.352 [2024-12-09 14:47:08.402347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:30.352 [2024-12-09 14:47:08.404524] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.611 [2024-12-09 14:47:08.514749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.611 [2024-12-09 14:47:08.516410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.870 [2024-12-09 14:47:08.734968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:30.870 [2024-12-09 14:47:08.735918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:31.130 136.33 IOPS, 409.00 MiB/s [2024-12-09T14:47:09.252Z] [2024-12-09 14:47:09.087697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:31.130 [2024-12-09 14:47:09.222850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.400 
14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.400 "name": "raid_bdev1", 00:14:31.400 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:31.400 "strip_size_kb": 0, 00:14:31.400 "state": "online", 00:14:31.400 "raid_level": "raid1", 00:14:31.400 "superblock": true, 00:14:31.400 "num_base_bdevs": 4, 00:14:31.400 "num_base_bdevs_discovered": 4, 00:14:31.400 "num_base_bdevs_operational": 4, 00:14:31.400 "process": { 00:14:31.400 "type": "rebuild", 00:14:31.400 "target": "spare", 00:14:31.400 "progress": { 00:14:31.400 "blocks": 10240, 00:14:31.400 "percent": 16 00:14:31.400 } 00:14:31.400 }, 00:14:31.400 "base_bdevs_list": [ 00:14:31.400 { 00:14:31.400 "name": "spare", 00:14:31.400 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:31.400 "is_configured": true, 00:14:31.400 "data_offset": 2048, 00:14:31.400 "data_size": 63488 00:14:31.400 }, 00:14:31.400 { 00:14:31.400 "name": "BaseBdev2", 00:14:31.400 "uuid": "238c3c17-69fa-52c4-ac7b-3c4f9e72dc8f", 00:14:31.400 "is_configured": true, 00:14:31.400 "data_offset": 2048, 00:14:31.400 "data_size": 63488 00:14:31.400 }, 00:14:31.400 { 00:14:31.400 "name": "BaseBdev3", 00:14:31.400 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:31.400 "is_configured": true, 00:14:31.400 "data_offset": 2048, 00:14:31.400 "data_size": 63488 00:14:31.400 }, 00:14:31.400 { 00:14:31.400 "name": "BaseBdev4", 00:14:31.400 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:31.400 "is_configured": true, 00:14:31.400 "data_offset": 2048, 
00:14:31.400 "data_size": 63488 00:14:31.400 } 00:14:31.400 ] 00:14:31.400 }' 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.400 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:31.660 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.660 [2024-12-09 14:47:09.547072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.660 [2024-12-09 14:47:09.732274] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:31.660 [2024-12-09 14:47:09.732425] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.660 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.920 "name": "raid_bdev1", 00:14:31.920 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:31.920 "strip_size_kb": 0, 00:14:31.920 "state": "online", 00:14:31.920 "raid_level": "raid1", 00:14:31.920 "superblock": true, 00:14:31.920 "num_base_bdevs": 4, 00:14:31.920 "num_base_bdevs_discovered": 3, 00:14:31.920 "num_base_bdevs_operational": 3, 00:14:31.920 "process": { 00:14:31.920 "type": "rebuild", 00:14:31.920 "target": "spare", 00:14:31.920 "progress": { 00:14:31.920 "blocks": 14336, 
00:14:31.920 "percent": 22 00:14:31.920 } 00:14:31.920 }, 00:14:31.920 "base_bdevs_list": [ 00:14:31.920 { 00:14:31.920 "name": "spare", 00:14:31.920 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:31.920 "is_configured": true, 00:14:31.920 "data_offset": 2048, 00:14:31.920 "data_size": 63488 00:14:31.920 }, 00:14:31.920 { 00:14:31.920 "name": null, 00:14:31.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.920 "is_configured": false, 00:14:31.920 "data_offset": 0, 00:14:31.920 "data_size": 63488 00:14:31.920 }, 00:14:31.920 { 00:14:31.920 "name": "BaseBdev3", 00:14:31.920 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:31.920 "is_configured": true, 00:14:31.920 "data_offset": 2048, 00:14:31.920 "data_size": 63488 00:14:31.920 }, 00:14:31.920 { 00:14:31.920 "name": "BaseBdev4", 00:14:31.920 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:31.920 "is_configured": true, 00:14:31.920 "data_offset": 2048, 00:14:31.920 "data_size": 63488 00:14:31.920 } 00:14:31.920 ] 00:14:31.920 }' 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.920 [2024-12-09 14:47:09.850735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.920 [2024-12-09 14:47:09.851153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.920 "name": "raid_bdev1", 00:14:31.920 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:31.920 "strip_size_kb": 0, 00:14:31.920 "state": "online", 00:14:31.920 "raid_level": "raid1", 00:14:31.920 "superblock": true, 00:14:31.920 "num_base_bdevs": 4, 00:14:31.920 "num_base_bdevs_discovered": 3, 00:14:31.920 "num_base_bdevs_operational": 3, 00:14:31.920 "process": { 00:14:31.920 "type": "rebuild", 00:14:31.920 "target": "spare", 00:14:31.920 "progress": { 00:14:31.920 "blocks": 16384, 00:14:31.920 "percent": 25 00:14:31.920 } 00:14:31.920 }, 00:14:31.920 "base_bdevs_list": [ 00:14:31.920 { 00:14:31.920 "name": "spare", 00:14:31.920 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 
00:14:31.920 "is_configured": true, 00:14:31.920 "data_offset": 2048, 00:14:31.920 "data_size": 63488 00:14:31.920 }, 00:14:31.920 { 00:14:31.920 "name": null, 00:14:31.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.920 "is_configured": false, 00:14:31.920 "data_offset": 0, 00:14:31.920 "data_size": 63488 00:14:31.920 }, 00:14:31.920 { 00:14:31.920 "name": "BaseBdev3", 00:14:31.920 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:31.920 "is_configured": true, 00:14:31.920 "data_offset": 2048, 00:14:31.920 "data_size": 63488 00:14:31.920 }, 00:14:31.920 { 00:14:31.920 "name": "BaseBdev4", 00:14:31.920 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:31.920 "is_configured": true, 00:14:31.920 "data_offset": 2048, 00:14:31.920 "data_size": 63488 00:14:31.920 } 00:14:31.920 ] 00:14:31.920 }' 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.920 14:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.920 14:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.920 14:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.180 122.75 IOPS, 368.25 MiB/s [2024-12-09T14:47:10.302Z] [2024-12-09 14:47:10.187537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:32.438 [2024-12-09 14:47:10.342124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:32.698 [2024-12-09 14:47:10.568663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:32.957 [2024-12-09 14:47:10.904014] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:32.957 [2024-12-09 14:47:11.020536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.957 109.80 IOPS, 329.40 MiB/s [2024-12-09T14:47:11.079Z] 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.957 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.957 "name": "raid_bdev1", 00:14:32.957 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:32.957 "strip_size_kb": 0, 00:14:32.957 "state": "online", 00:14:32.957 "raid_level": "raid1", 00:14:32.957 "superblock": true, 00:14:32.957 "num_base_bdevs": 4, 00:14:32.957 "num_base_bdevs_discovered": 3, 
00:14:32.957 "num_base_bdevs_operational": 3, 00:14:32.957 "process": { 00:14:32.957 "type": "rebuild", 00:14:32.957 "target": "spare", 00:14:32.957 "progress": { 00:14:32.957 "blocks": 34816, 00:14:32.957 "percent": 54 00:14:32.957 } 00:14:32.957 }, 00:14:32.957 "base_bdevs_list": [ 00:14:32.957 { 00:14:32.957 "name": "spare", 00:14:32.957 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:32.957 "is_configured": true, 00:14:32.957 "data_offset": 2048, 00:14:32.957 "data_size": 63488 00:14:32.957 }, 00:14:32.957 { 00:14:32.957 "name": null, 00:14:32.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.957 "is_configured": false, 00:14:32.957 "data_offset": 0, 00:14:32.957 "data_size": 63488 00:14:32.957 }, 00:14:32.957 { 00:14:32.957 "name": "BaseBdev3", 00:14:32.957 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:32.957 "is_configured": true, 00:14:32.958 "data_offset": 2048, 00:14:32.958 "data_size": 63488 00:14:32.958 }, 00:14:32.958 { 00:14:32.958 "name": "BaseBdev4", 00:14:32.958 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:32.958 "is_configured": true, 00:14:32.958 "data_offset": 2048, 00:14:32.958 "data_size": 63488 00:14:32.958 } 00:14:32.958 ] 00:14:32.958 }' 00:14:32.958 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.217 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.217 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.217 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.217 14:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.786 [2024-12-09 14:47:11.711230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:34.045 [2024-12-09 14:47:11.928774] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:34.045 99.67 IOPS, 299.00 MiB/s [2024-12-09T14:47:12.167Z] 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.045 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.305 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.305 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.305 "name": "raid_bdev1", 00:14:34.305 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:34.305 "strip_size_kb": 0, 00:14:34.305 "state": "online", 00:14:34.305 "raid_level": "raid1", 00:14:34.305 "superblock": true, 00:14:34.305 "num_base_bdevs": 4, 00:14:34.305 "num_base_bdevs_discovered": 3, 00:14:34.305 "num_base_bdevs_operational": 3, 00:14:34.305 "process": { 00:14:34.305 "type": "rebuild", 00:14:34.305 "target": "spare", 00:14:34.305 "progress": { 
00:14:34.305 "blocks": 51200, 00:14:34.305 "percent": 80 00:14:34.305 } 00:14:34.305 }, 00:14:34.305 "base_bdevs_list": [ 00:14:34.305 { 00:14:34.305 "name": "spare", 00:14:34.305 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:34.305 "is_configured": true, 00:14:34.305 "data_offset": 2048, 00:14:34.305 "data_size": 63488 00:14:34.305 }, 00:14:34.305 { 00:14:34.305 "name": null, 00:14:34.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.305 "is_configured": false, 00:14:34.305 "data_offset": 0, 00:14:34.305 "data_size": 63488 00:14:34.305 }, 00:14:34.305 { 00:14:34.305 "name": "BaseBdev3", 00:14:34.305 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:34.305 "is_configured": true, 00:14:34.305 "data_offset": 2048, 00:14:34.305 "data_size": 63488 00:14:34.305 }, 00:14:34.305 { 00:14:34.305 "name": "BaseBdev4", 00:14:34.305 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:34.305 "is_configured": true, 00:14:34.305 "data_offset": 2048, 00:14:34.305 "data_size": 63488 00:14:34.305 } 00:14:34.305 ] 00:14:34.305 }' 00:14:34.305 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.305 [2024-12-09 14:47:12.250316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:34.305 [2024-12-09 14:47:12.250940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:34.305 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.305 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.305 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.305 14:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.565 [2024-12-09 14:47:12.586191] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:35.133 [2024-12-09 14:47:13.021392] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:35.133 91.43 IOPS, 274.29 MiB/s [2024-12-09T14:47:13.255Z] [2024-12-09 14:47:13.119402] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:35.133 [2024-12-09 14:47:13.122730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.392 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.392 "name": "raid_bdev1", 00:14:35.392 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 
00:14:35.392 "strip_size_kb": 0, 00:14:35.392 "state": "online", 00:14:35.392 "raid_level": "raid1", 00:14:35.392 "superblock": true, 00:14:35.392 "num_base_bdevs": 4, 00:14:35.392 "num_base_bdevs_discovered": 3, 00:14:35.392 "num_base_bdevs_operational": 3, 00:14:35.392 "base_bdevs_list": [ 00:14:35.392 { 00:14:35.392 "name": "spare", 00:14:35.392 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:35.392 "is_configured": true, 00:14:35.392 "data_offset": 2048, 00:14:35.393 "data_size": 63488 00:14:35.393 }, 00:14:35.393 { 00:14:35.393 "name": null, 00:14:35.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.393 "is_configured": false, 00:14:35.393 "data_offset": 0, 00:14:35.393 "data_size": 63488 00:14:35.393 }, 00:14:35.393 { 00:14:35.393 "name": "BaseBdev3", 00:14:35.393 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:35.393 "is_configured": true, 00:14:35.393 "data_offset": 2048, 00:14:35.393 "data_size": 63488 00:14:35.393 }, 00:14:35.393 { 00:14:35.393 "name": "BaseBdev4", 00:14:35.393 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:35.393 "is_configured": true, 00:14:35.393 "data_offset": 2048, 00:14:35.393 "data_size": 63488 00:14:35.393 } 00:14:35.393 ] 00:14:35.393 }' 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.393 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.393 "name": "raid_bdev1", 00:14:35.393 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:35.393 "strip_size_kb": 0, 00:14:35.393 "state": "online", 00:14:35.393 "raid_level": "raid1", 00:14:35.393 "superblock": true, 00:14:35.393 "num_base_bdevs": 4, 00:14:35.393 "num_base_bdevs_discovered": 3, 00:14:35.393 "num_base_bdevs_operational": 3, 00:14:35.393 "base_bdevs_list": [ 00:14:35.393 { 00:14:35.393 "name": "spare", 00:14:35.393 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:35.393 "is_configured": true, 00:14:35.393 "data_offset": 2048, 00:14:35.393 "data_size": 63488 00:14:35.393 }, 00:14:35.393 { 00:14:35.393 "name": null, 00:14:35.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.393 "is_configured": false, 00:14:35.393 "data_offset": 0, 00:14:35.393 "data_size": 63488 00:14:35.393 }, 00:14:35.393 { 00:14:35.393 "name": "BaseBdev3", 00:14:35.393 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:35.393 "is_configured": true, 
00:14:35.393 "data_offset": 2048, 00:14:35.393 "data_size": 63488 00:14:35.393 }, 00:14:35.393 { 00:14:35.393 "name": "BaseBdev4", 00:14:35.393 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:35.393 "is_configured": true, 00:14:35.393 "data_offset": 2048, 00:14:35.393 "data_size": 63488 00:14:35.393 } 00:14:35.393 ] 00:14:35.393 }' 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.652 "name": "raid_bdev1", 00:14:35.652 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:35.652 "strip_size_kb": 0, 00:14:35.652 "state": "online", 00:14:35.652 "raid_level": "raid1", 00:14:35.652 "superblock": true, 00:14:35.652 "num_base_bdevs": 4, 00:14:35.652 "num_base_bdevs_discovered": 3, 00:14:35.652 "num_base_bdevs_operational": 3, 00:14:35.652 "base_bdevs_list": [ 00:14:35.652 { 00:14:35.652 "name": "spare", 00:14:35.652 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:35.652 "is_configured": true, 00:14:35.652 "data_offset": 2048, 00:14:35.652 "data_size": 63488 00:14:35.652 }, 00:14:35.652 { 00:14:35.652 "name": null, 00:14:35.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.652 "is_configured": false, 00:14:35.652 "data_offset": 0, 00:14:35.652 "data_size": 63488 00:14:35.652 }, 00:14:35.652 { 00:14:35.652 "name": "BaseBdev3", 00:14:35.652 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:35.652 "is_configured": true, 00:14:35.652 "data_offset": 2048, 00:14:35.652 "data_size": 63488 00:14:35.652 }, 00:14:35.652 { 00:14:35.652 "name": "BaseBdev4", 00:14:35.652 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:35.652 "is_configured": true, 00:14:35.652 "data_offset": 2048, 00:14:35.652 "data_size": 63488 00:14:35.652 } 00:14:35.652 ] 00:14:35.652 }' 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.652 14:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.220 83.12 IOPS, 249.38 MiB/s [2024-12-09T14:47:14.342Z] 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.220 [2024-12-09 14:47:14.096869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.220 [2024-12-09 14:47:14.096975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.220 00:14:36.220 Latency(us) 00:14:36.220 [2024-12-09T14:47:14.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.220 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:36.220 raid_bdev1 : 8.15 82.13 246.38 0.00 0.00 15857.10 341.63 118136.51 00:14:36.220 [2024-12-09T14:47:14.342Z] =================================================================================================================== 00:14:36.220 [2024-12-09T14:47:14.342Z] Total : 82.13 246.38 0.00 0.00 15857.10 341.63 118136.51 00:14:36.220 [2024-12-09 14:47:14.220501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.220 [2024-12-09 14:47:14.220684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.220 [2024-12-09 14:47:14.220817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.220 [2024-12-09 14:47:14.220888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:36.220 { 00:14:36.220 "results": [ 00:14:36.220 { 00:14:36.220 "job": "raid_bdev1", 
00:14:36.220 "core_mask": "0x1", 00:14:36.220 "workload": "randrw", 00:14:36.220 "percentage": 50, 00:14:36.220 "status": "finished", 00:14:36.220 "queue_depth": 2, 00:14:36.220 "io_size": 3145728, 00:14:36.220 "runtime": 8.146032, 00:14:36.220 "iops": 82.12587429069761, 00:14:36.220 "mibps": 246.37762287209284, 00:14:36.220 "io_failed": 0, 00:14:36.220 "io_timeout": 0, 00:14:36.220 "avg_latency_us": 15857.099293085555, 00:14:36.220 "min_latency_us": 341.63144104803496, 00:14:36.220 "max_latency_us": 118136.51004366812 00:14:36.220 } 00:14:36.220 ], 00:14:36.220 "core_count": 1 00:14:36.220 } 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
local bdev_list 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.220 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:36.479 /dev/nbd0 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:14:36.479 1+0 records in 00:14:36.479 1+0 records out 00:14:36.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424324 s, 9.7 MB/s 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.479 14:47:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.479 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:36.738 /dev/nbd1 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.738 1+0 records in 00:14:36.738 1+0 
records out 00:14:36.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320886 s, 12.8 MB/s 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.738 14:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:36.998 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:36.998 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.998 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:36.998 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.998 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:36.998 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.998 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.257 14:47:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.257 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:37.515 /dev/nbd1 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.515 1+0 records in 00:14:37.515 1+0 records out 00:14:37.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649173 s, 6.3 MB/s 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.515 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.779 14:47:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.779 14:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.039 [2024-12-09 14:47:16.063546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:38.039 [2024-12-09 14:47:16.063657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.039 [2024-12-09 14:47:16.063712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:38.039 [2024-12-09 14:47:16.063749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.039 [2024-12-09 14:47:16.065932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.039 [2024-12-09 14:47:16.066002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:38.039 [2024-12-09 14:47:16.066143] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:38.039 [2024-12-09 14:47:16.066228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.039 [2024-12-09 14:47:16.066416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.039 [2024-12-09 14:47:16.066551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:38.039 spare 00:14:38.039 14:47:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.039 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.299 [2024-12-09 14:47:16.166508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:38.299 [2024-12-09 14:47:16.166616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:38.299 [2024-12-09 14:47:16.166965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:38.299 [2024-12-09 14:47:16.167157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:38.299 [2024-12-09 14:47:16.167171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:38.299 [2024-12-09 14:47:16.167418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.299 "name": "raid_bdev1", 00:14:38.299 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:38.299 "strip_size_kb": 0, 00:14:38.299 "state": "online", 00:14:38.299 "raid_level": "raid1", 00:14:38.299 "superblock": true, 00:14:38.299 "num_base_bdevs": 4, 00:14:38.299 "num_base_bdevs_discovered": 3, 00:14:38.299 "num_base_bdevs_operational": 3, 00:14:38.299 "base_bdevs_list": [ 00:14:38.299 { 00:14:38.299 "name": "spare", 00:14:38.299 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:38.299 "is_configured": true, 00:14:38.299 "data_offset": 2048, 00:14:38.299 "data_size": 63488 00:14:38.299 }, 00:14:38.299 { 00:14:38.299 "name": null, 00:14:38.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.299 "is_configured": false, 00:14:38.299 "data_offset": 2048, 00:14:38.299 "data_size": 63488 00:14:38.299 }, 00:14:38.299 { 00:14:38.299 "name": "BaseBdev3", 00:14:38.299 
"uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:38.299 "is_configured": true, 00:14:38.299 "data_offset": 2048, 00:14:38.299 "data_size": 63488 00:14:38.299 }, 00:14:38.299 { 00:14:38.299 "name": "BaseBdev4", 00:14:38.299 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:38.299 "is_configured": true, 00:14:38.299 "data_offset": 2048, 00:14:38.299 "data_size": 63488 00:14:38.299 } 00:14:38.299 ] 00:14:38.299 }' 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.299 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.558 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.818 "name": "raid_bdev1", 00:14:38.818 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:38.818 "strip_size_kb": 0, 
00:14:38.818 "state": "online", 00:14:38.818 "raid_level": "raid1", 00:14:38.818 "superblock": true, 00:14:38.818 "num_base_bdevs": 4, 00:14:38.818 "num_base_bdevs_discovered": 3, 00:14:38.818 "num_base_bdevs_operational": 3, 00:14:38.818 "base_bdevs_list": [ 00:14:38.818 { 00:14:38.818 "name": "spare", 00:14:38.818 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:38.818 "is_configured": true, 00:14:38.818 "data_offset": 2048, 00:14:38.818 "data_size": 63488 00:14:38.818 }, 00:14:38.818 { 00:14:38.818 "name": null, 00:14:38.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.818 "is_configured": false, 00:14:38.818 "data_offset": 2048, 00:14:38.818 "data_size": 63488 00:14:38.818 }, 00:14:38.818 { 00:14:38.818 "name": "BaseBdev3", 00:14:38.818 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:38.818 "is_configured": true, 00:14:38.818 "data_offset": 2048, 00:14:38.818 "data_size": 63488 00:14:38.818 }, 00:14:38.818 { 00:14:38.818 "name": "BaseBdev4", 00:14:38.818 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:38.818 "is_configured": true, 00:14:38.818 "data_offset": 2048, 00:14:38.818 "data_size": 63488 00:14:38.818 } 00:14:38.818 ] 00:14:38.818 }' 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.818 [2024-12-09 14:47:16.834568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.818 "name": "raid_bdev1", 00:14:38.818 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:38.818 "strip_size_kb": 0, 00:14:38.818 "state": "online", 00:14:38.818 "raid_level": "raid1", 00:14:38.818 "superblock": true, 00:14:38.818 "num_base_bdevs": 4, 00:14:38.818 "num_base_bdevs_discovered": 2, 00:14:38.818 "num_base_bdevs_operational": 2, 00:14:38.818 "base_bdevs_list": [ 00:14:38.818 { 00:14:38.818 "name": null, 00:14:38.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.818 "is_configured": false, 00:14:38.818 "data_offset": 0, 00:14:38.818 "data_size": 63488 00:14:38.818 }, 00:14:38.818 { 00:14:38.818 "name": null, 00:14:38.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.818 "is_configured": false, 00:14:38.818 "data_offset": 2048, 00:14:38.818 "data_size": 63488 00:14:38.818 }, 00:14:38.818 { 00:14:38.818 "name": "BaseBdev3", 00:14:38.818 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:38.818 "is_configured": true, 00:14:38.818 "data_offset": 2048, 00:14:38.818 "data_size": 63488 00:14:38.818 }, 00:14:38.818 { 00:14:38.818 "name": "BaseBdev4", 00:14:38.818 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:38.818 "is_configured": true, 00:14:38.818 "data_offset": 2048, 00:14:38.818 "data_size": 63488 00:14:38.818 } 00:14:38.818 ] 00:14:38.818 }' 00:14:38.818 
14:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.818 14:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 14:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.389 14:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.389 14:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.389 [2024-12-09 14:47:17.305823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.389 [2024-12-09 14:47:17.306089] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:39.389 [2024-12-09 14:47:17.306148] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:39.389 [2024-12-09 14:47:17.306189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.389 [2024-12-09 14:47:17.320880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:39.389 14:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.389 14:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:39.389 [2024-12-09 14:47:17.322721] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.327 "name": "raid_bdev1", 00:14:40.327 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:40.327 "strip_size_kb": 0, 00:14:40.327 "state": "online", 00:14:40.327 "raid_level": "raid1", 00:14:40.327 "superblock": true, 00:14:40.327 "num_base_bdevs": 4, 00:14:40.327 "num_base_bdevs_discovered": 3, 00:14:40.327 "num_base_bdevs_operational": 3, 00:14:40.327 "process": { 00:14:40.327 "type": "rebuild", 00:14:40.327 "target": "spare", 00:14:40.327 "progress": { 00:14:40.327 "blocks": 20480, 00:14:40.327 "percent": 32 00:14:40.327 } 00:14:40.327 }, 00:14:40.327 "base_bdevs_list": [ 00:14:40.327 { 00:14:40.327 "name": "spare", 00:14:40.327 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:40.327 "is_configured": true, 00:14:40.327 "data_offset": 2048, 00:14:40.327 "data_size": 63488 00:14:40.327 }, 00:14:40.327 { 00:14:40.327 "name": null, 00:14:40.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.327 "is_configured": false, 00:14:40.327 "data_offset": 2048, 00:14:40.327 "data_size": 63488 00:14:40.327 }, 00:14:40.327 { 00:14:40.327 "name": "BaseBdev3", 00:14:40.327 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:40.327 
"is_configured": true, 00:14:40.327 "data_offset": 2048, 00:14:40.327 "data_size": 63488 00:14:40.327 }, 00:14:40.327 { 00:14:40.327 "name": "BaseBdev4", 00:14:40.327 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:40.327 "is_configured": true, 00:14:40.327 "data_offset": 2048, 00:14:40.327 "data_size": 63488 00:14:40.327 } 00:14:40.327 ] 00:14:40.327 }' 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.327 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.587 [2024-12-09 14:47:18.466405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.587 [2024-12-09 14:47:18.528651] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.587 [2024-12-09 14:47:18.528712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.587 [2024-12-09 14:47:18.528730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.587 [2024-12-09 14:47:18.528736] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.587 "name": "raid_bdev1", 00:14:40.587 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:40.587 "strip_size_kb": 0, 00:14:40.587 "state": "online", 00:14:40.587 "raid_level": "raid1", 00:14:40.587 "superblock": true, 00:14:40.587 "num_base_bdevs": 4, 00:14:40.587 
"num_base_bdevs_discovered": 2, 00:14:40.587 "num_base_bdevs_operational": 2, 00:14:40.587 "base_bdevs_list": [ 00:14:40.587 { 00:14:40.587 "name": null, 00:14:40.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.587 "is_configured": false, 00:14:40.587 "data_offset": 0, 00:14:40.587 "data_size": 63488 00:14:40.587 }, 00:14:40.587 { 00:14:40.587 "name": null, 00:14:40.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.587 "is_configured": false, 00:14:40.587 "data_offset": 2048, 00:14:40.587 "data_size": 63488 00:14:40.587 }, 00:14:40.587 { 00:14:40.587 "name": "BaseBdev3", 00:14:40.587 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:40.587 "is_configured": true, 00:14:40.587 "data_offset": 2048, 00:14:40.587 "data_size": 63488 00:14:40.587 }, 00:14:40.587 { 00:14:40.587 "name": "BaseBdev4", 00:14:40.587 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:40.587 "is_configured": true, 00:14:40.587 "data_offset": 2048, 00:14:40.587 "data_size": 63488 00:14:40.587 } 00:14:40.587 ] 00:14:40.587 }' 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.587 14:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.156 14:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:41.156 14:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.156 14:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.156 [2024-12-09 14:47:19.053795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:41.156 [2024-12-09 14:47:19.053864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.156 [2024-12-09 14:47:19.053898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:41.156 [2024-12-09 
14:47:19.053909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.156 [2024-12-09 14:47:19.054472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.156 [2024-12-09 14:47:19.054493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:41.156 [2024-12-09 14:47:19.054618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:41.156 [2024-12-09 14:47:19.054638] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:41.156 [2024-12-09 14:47:19.054654] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:41.156 [2024-12-09 14:47:19.054680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.156 [2024-12-09 14:47:19.072702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:41.156 spare 00:14:41.156 14:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.156 14:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:41.156 [2024-12-09 14:47:19.074819] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.095 14:47:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.095 "name": "raid_bdev1", 00:14:42.095 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:42.095 "strip_size_kb": 0, 00:14:42.095 "state": "online", 00:14:42.095 "raid_level": "raid1", 00:14:42.095 "superblock": true, 00:14:42.095 "num_base_bdevs": 4, 00:14:42.095 "num_base_bdevs_discovered": 3, 00:14:42.095 "num_base_bdevs_operational": 3, 00:14:42.095 "process": { 00:14:42.095 "type": "rebuild", 00:14:42.095 "target": "spare", 00:14:42.095 "progress": { 00:14:42.095 "blocks": 20480, 00:14:42.095 "percent": 32 00:14:42.095 } 00:14:42.095 }, 00:14:42.095 "base_bdevs_list": [ 00:14:42.095 { 00:14:42.095 "name": "spare", 00:14:42.095 "uuid": "396c2572-82df-59ad-8f04-e81fdc0398d2", 00:14:42.095 "is_configured": true, 00:14:42.095 "data_offset": 2048, 00:14:42.095 "data_size": 63488 00:14:42.095 }, 00:14:42.095 { 00:14:42.095 "name": null, 00:14:42.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.095 "is_configured": false, 00:14:42.095 "data_offset": 2048, 00:14:42.095 "data_size": 63488 00:14:42.095 }, 00:14:42.095 { 00:14:42.095 "name": "BaseBdev3", 00:14:42.095 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:42.095 "is_configured": true, 00:14:42.095 "data_offset": 2048, 00:14:42.095 "data_size": 63488 00:14:42.095 }, 00:14:42.095 { 00:14:42.095 "name": "BaseBdev4", 00:14:42.095 "uuid": 
"c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:42.095 "is_configured": true, 00:14:42.095 "data_offset": 2048, 00:14:42.095 "data_size": 63488 00:14:42.095 } 00:14:42.095 ] 00:14:42.095 }' 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.095 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.355 [2024-12-09 14:47:20.242632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.355 [2024-12-09 14:47:20.280918] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.355 [2024-12-09 14:47:20.280990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.355 [2024-12-09 14:47:20.281009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.355 [2024-12-09 14:47:20.281020] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.355 14:47:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.355 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.355 "name": "raid_bdev1", 00:14:42.355 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:42.355 "strip_size_kb": 0, 00:14:42.355 "state": "online", 00:14:42.355 "raid_level": "raid1", 00:14:42.355 "superblock": true, 00:14:42.355 "num_base_bdevs": 4, 00:14:42.355 "num_base_bdevs_discovered": 2, 00:14:42.355 "num_base_bdevs_operational": 2, 00:14:42.355 "base_bdevs_list": [ 00:14:42.355 { 00:14:42.355 "name": null, 00:14:42.355 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:42.355 "is_configured": false, 00:14:42.355 "data_offset": 0, 00:14:42.355 "data_size": 63488 00:14:42.355 }, 00:14:42.355 { 00:14:42.355 "name": null, 00:14:42.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.355 "is_configured": false, 00:14:42.355 "data_offset": 2048, 00:14:42.355 "data_size": 63488 00:14:42.355 }, 00:14:42.355 { 00:14:42.355 "name": "BaseBdev3", 00:14:42.355 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:42.355 "is_configured": true, 00:14:42.355 "data_offset": 2048, 00:14:42.355 "data_size": 63488 00:14:42.355 }, 00:14:42.355 { 00:14:42.355 "name": "BaseBdev4", 00:14:42.355 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:42.355 "is_configured": true, 00:14:42.356 "data_offset": 2048, 00:14:42.356 "data_size": 63488 00:14:42.356 } 00:14:42.356 ] 00:14:42.356 }' 00:14:42.356 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.356 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.925 "name": "raid_bdev1", 00:14:42.925 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:42.925 "strip_size_kb": 0, 00:14:42.925 "state": "online", 00:14:42.925 "raid_level": "raid1", 00:14:42.925 "superblock": true, 00:14:42.925 "num_base_bdevs": 4, 00:14:42.925 "num_base_bdevs_discovered": 2, 00:14:42.925 "num_base_bdevs_operational": 2, 00:14:42.925 "base_bdevs_list": [ 00:14:42.925 { 00:14:42.925 "name": null, 00:14:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.925 "is_configured": false, 00:14:42.925 "data_offset": 0, 00:14:42.925 "data_size": 63488 00:14:42.925 }, 00:14:42.925 { 00:14:42.925 "name": null, 00:14:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.925 "is_configured": false, 00:14:42.925 "data_offset": 2048, 00:14:42.925 "data_size": 63488 00:14:42.925 }, 00:14:42.925 { 00:14:42.925 "name": "BaseBdev3", 00:14:42.925 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:42.925 "is_configured": true, 00:14:42.925 "data_offset": 2048, 00:14:42.925 "data_size": 63488 00:14:42.925 }, 00:14:42.925 { 00:14:42.925 "name": "BaseBdev4", 00:14:42.925 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:42.925 "is_configured": true, 00:14:42.925 "data_offset": 2048, 00:14:42.925 "data_size": 63488 00:14:42.925 } 00:14:42.925 ] 00:14:42.925 }' 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.925 14:47:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.925 [2024-12-09 14:47:20.946711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:42.925 [2024-12-09 14:47:20.946782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.925 [2024-12-09 14:47:20.946807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:42.925 [2024-12-09 14:47:20.946820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.925 [2024-12-09 14:47:20.947349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.925 [2024-12-09 14:47:20.947372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:42.925 [2024-12-09 14:47:20.947466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:42.925 [2024-12-09 14:47:20.947487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:42.925 [2024-12-09 14:47:20.947498] 
bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:42.925 [2024-12-09 14:47:20.947511] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:42.925 BaseBdev1 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.925 14:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.864 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.864 14:47:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.123 14:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.123 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.123 "name": "raid_bdev1", 00:14:44.123 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:44.123 "strip_size_kb": 0, 00:14:44.123 "state": "online", 00:14:44.123 "raid_level": "raid1", 00:14:44.123 "superblock": true, 00:14:44.123 "num_base_bdevs": 4, 00:14:44.123 "num_base_bdevs_discovered": 2, 00:14:44.123 "num_base_bdevs_operational": 2, 00:14:44.123 "base_bdevs_list": [ 00:14:44.123 { 00:14:44.123 "name": null, 00:14:44.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.123 "is_configured": false, 00:14:44.123 "data_offset": 0, 00:14:44.123 "data_size": 63488 00:14:44.123 }, 00:14:44.123 { 00:14:44.123 "name": null, 00:14:44.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.123 "is_configured": false, 00:14:44.123 "data_offset": 2048, 00:14:44.123 "data_size": 63488 00:14:44.123 }, 00:14:44.123 { 00:14:44.123 "name": "BaseBdev3", 00:14:44.123 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:44.123 "is_configured": true, 00:14:44.123 "data_offset": 2048, 00:14:44.123 "data_size": 63488 00:14:44.123 }, 00:14:44.123 { 00:14:44.123 "name": "BaseBdev4", 00:14:44.123 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:44.123 "is_configured": true, 00:14:44.123 "data_offset": 2048, 00:14:44.123 "data_size": 63488 00:14:44.123 } 00:14:44.123 ] 00:14:44.123 }' 00:14:44.123 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.124 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.431 "name": "raid_bdev1", 00:14:44.431 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:44.431 "strip_size_kb": 0, 00:14:44.431 "state": "online", 00:14:44.431 "raid_level": "raid1", 00:14:44.431 "superblock": true, 00:14:44.431 "num_base_bdevs": 4, 00:14:44.431 "num_base_bdevs_discovered": 2, 00:14:44.431 "num_base_bdevs_operational": 2, 00:14:44.431 "base_bdevs_list": [ 00:14:44.431 { 00:14:44.431 "name": null, 00:14:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.431 "is_configured": false, 00:14:44.431 "data_offset": 0, 00:14:44.431 "data_size": 63488 00:14:44.431 }, 00:14:44.431 { 00:14:44.431 "name": null, 00:14:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.431 "is_configured": false, 00:14:44.431 "data_offset": 2048, 00:14:44.431 "data_size": 63488 00:14:44.431 }, 00:14:44.431 { 00:14:44.431 "name": "BaseBdev3", 00:14:44.431 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 
00:14:44.431 "is_configured": true, 00:14:44.431 "data_offset": 2048, 00:14:44.431 "data_size": 63488 00:14:44.431 }, 00:14:44.431 { 00:14:44.431 "name": "BaseBdev4", 00:14:44.431 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:44.431 "is_configured": true, 00:14:44.431 "data_offset": 2048, 00:14:44.431 "data_size": 63488 00:14:44.431 } 00:14:44.431 ] 00:14:44.431 }' 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.431 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.694 14:47:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.694 [2024-12-09 14:47:22.568331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.694 [2024-12-09 14:47:22.568711] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:44.694 [2024-12-09 14:47:22.568778] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.694 request: 00:14:44.694 { 00:14:44.694 "base_bdev": "BaseBdev1", 00:14:44.694 "raid_bdev": "raid_bdev1", 00:14:44.694 "method": "bdev_raid_add_base_bdev", 00:14:44.694 "req_id": 1 00:14:44.694 } 00:14:44.694 Got JSON-RPC error response 00:14:44.694 response: 00:14:44.694 { 00:14:44.694 "code": -22, 00:14:44.694 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:44.694 } 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.694 14:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.632 "name": "raid_bdev1", 00:14:45.632 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:45.632 "strip_size_kb": 0, 00:14:45.632 "state": "online", 00:14:45.632 "raid_level": "raid1", 00:14:45.632 "superblock": true, 00:14:45.632 "num_base_bdevs": 4, 00:14:45.632 "num_base_bdevs_discovered": 2, 00:14:45.632 "num_base_bdevs_operational": 2, 00:14:45.632 "base_bdevs_list": [ 00:14:45.632 { 00:14:45.632 "name": null, 00:14:45.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.632 "is_configured": false, 00:14:45.632 "data_offset": 0, 00:14:45.632 "data_size": 63488 00:14:45.632 }, 00:14:45.632 { 
00:14:45.632 "name": null, 00:14:45.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.632 "is_configured": false, 00:14:45.632 "data_offset": 2048, 00:14:45.632 "data_size": 63488 00:14:45.632 }, 00:14:45.632 { 00:14:45.632 "name": "BaseBdev3", 00:14:45.632 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:45.632 "is_configured": true, 00:14:45.632 "data_offset": 2048, 00:14:45.632 "data_size": 63488 00:14:45.632 }, 00:14:45.632 { 00:14:45.632 "name": "BaseBdev4", 00:14:45.632 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:45.632 "is_configured": true, 00:14:45.632 "data_offset": 2048, 00:14:45.632 "data_size": 63488 00:14:45.632 } 00:14:45.632 ] 00:14:45.632 }' 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.632 14:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.891 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.891 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.892 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.892 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.892 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.149 "name": "raid_bdev1", 00:14:46.149 "uuid": "64fb0921-c12a-4d6a-b4aa-5ff124698348", 00:14:46.149 "strip_size_kb": 0, 00:14:46.149 "state": "online", 00:14:46.149 "raid_level": "raid1", 00:14:46.149 "superblock": true, 00:14:46.149 "num_base_bdevs": 4, 00:14:46.149 "num_base_bdevs_discovered": 2, 00:14:46.149 "num_base_bdevs_operational": 2, 00:14:46.149 "base_bdevs_list": [ 00:14:46.149 { 00:14:46.149 "name": null, 00:14:46.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.149 "is_configured": false, 00:14:46.149 "data_offset": 0, 00:14:46.149 "data_size": 63488 00:14:46.149 }, 00:14:46.149 { 00:14:46.149 "name": null, 00:14:46.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.149 "is_configured": false, 00:14:46.149 "data_offset": 2048, 00:14:46.149 "data_size": 63488 00:14:46.149 }, 00:14:46.149 { 00:14:46.149 "name": "BaseBdev3", 00:14:46.149 "uuid": "486ffbc9-8e6f-50f5-9278-a185ffdcaa7b", 00:14:46.149 "is_configured": true, 00:14:46.149 "data_offset": 2048, 00:14:46.149 "data_size": 63488 00:14:46.149 }, 00:14:46.149 { 00:14:46.149 "name": "BaseBdev4", 00:14:46.149 "uuid": "c8c4dbcf-caf2-53aa-aefc-84451cce8e8b", 00:14:46.149 "is_configured": true, 00:14:46.149 "data_offset": 2048, 00:14:46.149 "data_size": 63488 00:14:46.149 } 00:14:46.149 ] 00:14:46.149 }' 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@784 -- # killprocess 80473 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 80473 ']' 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 80473 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80473 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.149 killing process with pid 80473 00:14:46.149 Received shutdown signal, test time was about 18.172291 seconds 00:14:46.149 00:14:46.149 Latency(us) 00:14:46.149 [2024-12-09T14:47:24.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.149 [2024-12-09T14:47:24.271Z] =================================================================================================================== 00:14:46.149 [2024-12-09T14:47:24.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.149 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80473' 00:14:46.150 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 80473 00:14:46.150 [2024-12-09 14:47:24.203567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.150 [2024-12-09 14:47:24.203713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.150 14:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 80473 00:14:46.150 [2024-12-09 14:47:24.203788] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.150 [2024-12-09 14:47:24.203799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:46.716 [2024-12-09 14:47:24.701820] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.094 14:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:48.094 00:14:48.094 real 0m21.983s 00:14:48.094 user 0m28.802s 00:14:48.094 sys 0m2.596s 00:14:48.094 14:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.094 14:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.094 ************************************ 00:14:48.094 END TEST raid_rebuild_test_sb_io 00:14:48.094 ************************************ 00:14:48.094 14:47:26 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:48.094 14:47:26 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:48.094 14:47:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:48.094 14:47:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.353 14:47:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.353 ************************************ 00:14:48.353 START TEST raid5f_state_function_test 00:14:48.353 ************************************ 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 
00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:48.353 
14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81207 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81207' 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:48.353 Process raid pid: 81207 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81207 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81207 ']' 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.353 14:47:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.353 [2024-12-09 14:47:26.333016] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:14:48.353 [2024-12-09 14:47:26.333156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.612 [2024-12-09 14:47:26.511996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.612 [2024-12-09 14:47:26.651852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.871 [2024-12-09 14:47:26.909858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.871 [2024-12-09 14:47:26.909905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.130 [2024-12-09 14:47:27.215784] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.130 [2024-12-09 14:47:27.215889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.130 [2024-12-09 14:47:27.215923] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.130 [2024-12-09 14:47:27.215948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.130 [2024-12-09 14:47:27.215967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.130 [2024-12-09 14:47:27.216023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.130 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.389 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.389 "name": "Existed_Raid", 00:14:49.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.389 "strip_size_kb": 64, 00:14:49.389 "state": "configuring", 00:14:49.389 "raid_level": "raid5f", 00:14:49.389 "superblock": false, 00:14:49.389 "num_base_bdevs": 3, 00:14:49.389 "num_base_bdevs_discovered": 0, 00:14:49.389 "num_base_bdevs_operational": 3, 00:14:49.389 "base_bdevs_list": [ 00:14:49.389 { 00:14:49.389 "name": "BaseBdev1", 00:14:49.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.389 "is_configured": false, 00:14:49.389 "data_offset": 0, 00:14:49.389 "data_size": 0 00:14:49.389 }, 00:14:49.389 { 00:14:49.389 "name": "BaseBdev2", 00:14:49.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.389 "is_configured": false, 00:14:49.389 "data_offset": 0, 00:14:49.389 "data_size": 0 00:14:49.389 }, 00:14:49.389 { 00:14:49.389 "name": "BaseBdev3", 00:14:49.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.389 "is_configured": false, 00:14:49.389 "data_offset": 0, 00:14:49.389 "data_size": 0 00:14:49.389 } 00:14:49.389 ] 00:14:49.389 }' 00:14:49.389 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.389 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.649 [2024-12-09 14:47:27.690942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.649 [2024-12-09 14:47:27.691039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.649 [2024-12-09 14:47:27.702915] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.649 [2024-12-09 14:47:27.703024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.649 [2024-12-09 14:47:27.703060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.649 [2024-12-09 14:47:27.703089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.649 [2024-12-09 14:47:27.703157] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.649 [2024-12-09 14:47:27.703193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.649 [2024-12-09 14:47:27.757171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.649 BaseBdev1 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.649 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.908 [ 00:14:49.908 { 00:14:49.908 "name": "BaseBdev1", 00:14:49.908 "aliases": [ 
00:14:49.908 "8b188649-ebfd-4d44-a1b9-37eb4d427ffa" 00:14:49.908 ], 00:14:49.908 "product_name": "Malloc disk", 00:14:49.908 "block_size": 512, 00:14:49.908 "num_blocks": 65536, 00:14:49.908 "uuid": "8b188649-ebfd-4d44-a1b9-37eb4d427ffa", 00:14:49.908 "assigned_rate_limits": { 00:14:49.908 "rw_ios_per_sec": 0, 00:14:49.908 "rw_mbytes_per_sec": 0, 00:14:49.908 "r_mbytes_per_sec": 0, 00:14:49.908 "w_mbytes_per_sec": 0 00:14:49.908 }, 00:14:49.908 "claimed": true, 00:14:49.908 "claim_type": "exclusive_write", 00:14:49.908 "zoned": false, 00:14:49.908 "supported_io_types": { 00:14:49.908 "read": true, 00:14:49.908 "write": true, 00:14:49.908 "unmap": true, 00:14:49.908 "flush": true, 00:14:49.908 "reset": true, 00:14:49.908 "nvme_admin": false, 00:14:49.908 "nvme_io": false, 00:14:49.908 "nvme_io_md": false, 00:14:49.908 "write_zeroes": true, 00:14:49.908 "zcopy": true, 00:14:49.908 "get_zone_info": false, 00:14:49.908 "zone_management": false, 00:14:49.908 "zone_append": false, 00:14:49.908 "compare": false, 00:14:49.908 "compare_and_write": false, 00:14:49.908 "abort": true, 00:14:49.908 "seek_hole": false, 00:14:49.908 "seek_data": false, 00:14:49.908 "copy": true, 00:14:49.908 "nvme_iov_md": false 00:14:49.908 }, 00:14:49.908 "memory_domains": [ 00:14:49.908 { 00:14:49.908 "dma_device_id": "system", 00:14:49.908 "dma_device_type": 1 00:14:49.908 }, 00:14:49.908 { 00:14:49.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.908 "dma_device_type": 2 00:14:49.908 } 00:14:49.908 ], 00:14:49.908 "driver_specific": {} 00:14:49.908 } 00:14:49.908 ] 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.908 "name": "Existed_Raid", 00:14:49.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.908 "strip_size_kb": 64, 00:14:49.908 "state": "configuring", 00:14:49.908 "raid_level": "raid5f", 00:14:49.908 "superblock": false, 00:14:49.908 "num_base_bdevs": 3, 00:14:49.908 "num_base_bdevs_discovered": 1, 00:14:49.908 
"num_base_bdevs_operational": 3, 00:14:49.908 "base_bdevs_list": [ 00:14:49.908 { 00:14:49.908 "name": "BaseBdev1", 00:14:49.908 "uuid": "8b188649-ebfd-4d44-a1b9-37eb4d427ffa", 00:14:49.908 "is_configured": true, 00:14:49.908 "data_offset": 0, 00:14:49.908 "data_size": 65536 00:14:49.908 }, 00:14:49.908 { 00:14:49.908 "name": "BaseBdev2", 00:14:49.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.908 "is_configured": false, 00:14:49.908 "data_offset": 0, 00:14:49.908 "data_size": 0 00:14:49.908 }, 00:14:49.908 { 00:14:49.908 "name": "BaseBdev3", 00:14:49.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.908 "is_configured": false, 00:14:49.908 "data_offset": 0, 00:14:49.908 "data_size": 0 00:14:49.908 } 00:14:49.908 ] 00:14:49.908 }' 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.908 14:47:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.168 [2024-12-09 14:47:28.276418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.168 [2024-12-09 14:47:28.276531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.168 [2024-12-09 14:47:28.284455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.168 [2024-12-09 14:47:28.286709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.168 [2024-12-09 14:47:28.286800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.168 [2024-12-09 14:47:28.286845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.168 [2024-12-09 14:47:28.286874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.168 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.426 14:47:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.426 "name": "Existed_Raid", 00:14:50.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.426 "strip_size_kb": 64, 00:14:50.426 "state": "configuring", 00:14:50.426 "raid_level": "raid5f", 00:14:50.426 "superblock": false, 00:14:50.426 "num_base_bdevs": 3, 00:14:50.426 "num_base_bdevs_discovered": 1, 00:14:50.426 "num_base_bdevs_operational": 3, 00:14:50.426 "base_bdevs_list": [ 00:14:50.426 { 00:14:50.426 "name": "BaseBdev1", 00:14:50.426 "uuid": "8b188649-ebfd-4d44-a1b9-37eb4d427ffa", 00:14:50.426 "is_configured": true, 00:14:50.426 "data_offset": 0, 00:14:50.426 "data_size": 65536 00:14:50.426 }, 00:14:50.426 { 00:14:50.426 "name": "BaseBdev2", 00:14:50.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.426 "is_configured": false, 00:14:50.426 "data_offset": 0, 00:14:50.426 "data_size": 0 00:14:50.426 }, 00:14:50.426 { 00:14:50.426 "name": "BaseBdev3", 00:14:50.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.426 "is_configured": false, 
00:14:50.426 "data_offset": 0, 00:14:50.426 "data_size": 0 00:14:50.426 } 00:14:50.426 ] 00:14:50.426 }' 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.426 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.685 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:50.685 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.685 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.943 [2024-12-09 14:47:28.823945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.943 BaseBdev2 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.943 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.944 14:47:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.944 [ 00:14:50.944 { 00:14:50.944 "name": "BaseBdev2", 00:14:50.944 "aliases": [ 00:14:50.944 "81ec2140-fd12-4cad-965e-90e07fb0bbef" 00:14:50.944 ], 00:14:50.944 "product_name": "Malloc disk", 00:14:50.944 "block_size": 512, 00:14:50.944 "num_blocks": 65536, 00:14:50.944 "uuid": "81ec2140-fd12-4cad-965e-90e07fb0bbef", 00:14:50.944 "assigned_rate_limits": { 00:14:50.944 "rw_ios_per_sec": 0, 00:14:50.944 "rw_mbytes_per_sec": 0, 00:14:50.944 "r_mbytes_per_sec": 0, 00:14:50.944 "w_mbytes_per_sec": 0 00:14:50.944 }, 00:14:50.944 "claimed": true, 00:14:50.944 "claim_type": "exclusive_write", 00:14:50.944 "zoned": false, 00:14:50.944 "supported_io_types": { 00:14:50.944 "read": true, 00:14:50.944 "write": true, 00:14:50.944 "unmap": true, 00:14:50.944 "flush": true, 00:14:50.944 "reset": true, 00:14:50.944 "nvme_admin": false, 00:14:50.944 "nvme_io": false, 00:14:50.944 "nvme_io_md": false, 00:14:50.944 "write_zeroes": true, 00:14:50.944 "zcopy": true, 00:14:50.944 "get_zone_info": false, 00:14:50.944 "zone_management": false, 00:14:50.944 "zone_append": false, 00:14:50.944 "compare": false, 00:14:50.944 "compare_and_write": false, 00:14:50.944 "abort": true, 00:14:50.944 "seek_hole": false, 00:14:50.944 "seek_data": false, 00:14:50.944 "copy": true, 00:14:50.944 "nvme_iov_md": false 00:14:50.944 }, 00:14:50.944 "memory_domains": [ 00:14:50.944 { 00:14:50.944 "dma_device_id": "system", 00:14:50.944 "dma_device_type": 1 00:14:50.944 }, 00:14:50.944 { 00:14:50.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.944 
"dma_device_type": 2 00:14:50.944 } 00:14:50.944 ], 00:14:50.944 "driver_specific": {} 00:14:50.944 } 00:14:50.944 ] 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.944 
14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.944 "name": "Existed_Raid", 00:14:50.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.944 "strip_size_kb": 64, 00:14:50.944 "state": "configuring", 00:14:50.944 "raid_level": "raid5f", 00:14:50.944 "superblock": false, 00:14:50.944 "num_base_bdevs": 3, 00:14:50.944 "num_base_bdevs_discovered": 2, 00:14:50.944 "num_base_bdevs_operational": 3, 00:14:50.944 "base_bdevs_list": [ 00:14:50.944 { 00:14:50.944 "name": "BaseBdev1", 00:14:50.944 "uuid": "8b188649-ebfd-4d44-a1b9-37eb4d427ffa", 00:14:50.944 "is_configured": true, 00:14:50.944 "data_offset": 0, 00:14:50.944 "data_size": 65536 00:14:50.944 }, 00:14:50.944 { 00:14:50.944 "name": "BaseBdev2", 00:14:50.944 "uuid": "81ec2140-fd12-4cad-965e-90e07fb0bbef", 00:14:50.944 "is_configured": true, 00:14:50.944 "data_offset": 0, 00:14:50.944 "data_size": 65536 00:14:50.944 }, 00:14:50.944 { 00:14:50.944 "name": "BaseBdev3", 00:14:50.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.944 "is_configured": false, 00:14:50.944 "data_offset": 0, 00:14:50.944 "data_size": 0 00:14:50.944 } 00:14:50.944 ] 00:14:50.944 }' 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.944 14:47:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.511 14:47:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.511 [2024-12-09 14:47:29.402603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.511 [2024-12-09 14:47:29.402715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:51.511 [2024-12-09 14:47:29.402748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:51.511 [2024-12-09 14:47:29.403056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:51.511 [2024-12-09 14:47:29.408685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:51.511 BaseBdev3 00:14:51.511 [2024-12-09 14:47:29.408741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:51.511 [2024-12-09 14:47:29.409019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.511 [ 00:14:51.511 { 00:14:51.511 "name": "BaseBdev3", 00:14:51.511 "aliases": [ 00:14:51.511 "a817fadf-4866-492e-8cd5-791c79236cd9" 00:14:51.511 ], 00:14:51.511 "product_name": "Malloc disk", 00:14:51.511 "block_size": 512, 00:14:51.511 "num_blocks": 65536, 00:14:51.511 "uuid": "a817fadf-4866-492e-8cd5-791c79236cd9", 00:14:51.511 "assigned_rate_limits": { 00:14:51.511 "rw_ios_per_sec": 0, 00:14:51.511 "rw_mbytes_per_sec": 0, 00:14:51.511 "r_mbytes_per_sec": 0, 00:14:51.511 "w_mbytes_per_sec": 0 00:14:51.511 }, 00:14:51.511 "claimed": true, 00:14:51.511 "claim_type": "exclusive_write", 00:14:51.511 "zoned": false, 00:14:51.511 "supported_io_types": { 00:14:51.511 "read": true, 00:14:51.511 "write": true, 00:14:51.511 "unmap": true, 00:14:51.511 "flush": true, 00:14:51.511 "reset": true, 00:14:51.511 "nvme_admin": false, 00:14:51.511 "nvme_io": false, 00:14:51.511 "nvme_io_md": false, 00:14:51.511 "write_zeroes": true, 00:14:51.511 "zcopy": true, 00:14:51.511 "get_zone_info": false, 00:14:51.511 "zone_management": false, 00:14:51.511 "zone_append": false, 00:14:51.511 "compare": false, 00:14:51.511 "compare_and_write": false, 00:14:51.511 "abort": true, 00:14:51.511 "seek_hole": false, 00:14:51.511 "seek_data": false, 00:14:51.511 "copy": true, 00:14:51.511 "nvme_iov_md": false 00:14:51.511 }, 00:14:51.511 "memory_domains": [ 00:14:51.511 { 
00:14:51.511 "dma_device_id": "system", 00:14:51.511 "dma_device_type": 1 00:14:51.511 }, 00:14:51.511 { 00:14:51.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.511 "dma_device_type": 2 00:14:51.511 } 00:14:51.511 ], 00:14:51.511 "driver_specific": {} 00:14:51.511 } 00:14:51.511 ] 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.511 "name": "Existed_Raid", 00:14:51.511 "uuid": "ea6e7d07-15e0-4ae9-bc27-929c272f56e4", 00:14:51.511 "strip_size_kb": 64, 00:14:51.511 "state": "online", 00:14:51.511 "raid_level": "raid5f", 00:14:51.511 "superblock": false, 00:14:51.511 "num_base_bdevs": 3, 00:14:51.511 "num_base_bdevs_discovered": 3, 00:14:51.511 "num_base_bdevs_operational": 3, 00:14:51.511 "base_bdevs_list": [ 00:14:51.511 { 00:14:51.511 "name": "BaseBdev1", 00:14:51.511 "uuid": "8b188649-ebfd-4d44-a1b9-37eb4d427ffa", 00:14:51.511 "is_configured": true, 00:14:51.511 "data_offset": 0, 00:14:51.511 "data_size": 65536 00:14:51.511 }, 00:14:51.511 { 00:14:51.511 "name": "BaseBdev2", 00:14:51.511 "uuid": "81ec2140-fd12-4cad-965e-90e07fb0bbef", 00:14:51.511 "is_configured": true, 00:14:51.511 "data_offset": 0, 00:14:51.511 "data_size": 65536 00:14:51.511 }, 00:14:51.511 { 00:14:51.511 "name": "BaseBdev3", 00:14:51.511 "uuid": "a817fadf-4866-492e-8cd5-791c79236cd9", 00:14:51.511 "is_configured": true, 00:14:51.511 "data_offset": 0, 00:14:51.511 "data_size": 65536 00:14:51.511 } 00:14:51.511 ] 00:14:51.511 }' 00:14:51.511 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.512 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.077 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:52.077 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:52.077 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.077 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.078 [2024-12-09 14:47:29.958747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.078 "name": "Existed_Raid", 00:14:52.078 "aliases": [ 00:14:52.078 "ea6e7d07-15e0-4ae9-bc27-929c272f56e4" 00:14:52.078 ], 00:14:52.078 "product_name": "Raid Volume", 00:14:52.078 "block_size": 512, 00:14:52.078 "num_blocks": 131072, 00:14:52.078 "uuid": "ea6e7d07-15e0-4ae9-bc27-929c272f56e4", 00:14:52.078 "assigned_rate_limits": { 00:14:52.078 "rw_ios_per_sec": 0, 00:14:52.078 "rw_mbytes_per_sec": 0, 00:14:52.078 "r_mbytes_per_sec": 0, 00:14:52.078 "w_mbytes_per_sec": 0 00:14:52.078 }, 00:14:52.078 "claimed": false, 00:14:52.078 "zoned": false, 00:14:52.078 "supported_io_types": { 00:14:52.078 
"read": true, 00:14:52.078 "write": true, 00:14:52.078 "unmap": false, 00:14:52.078 "flush": false, 00:14:52.078 "reset": true, 00:14:52.078 "nvme_admin": false, 00:14:52.078 "nvme_io": false, 00:14:52.078 "nvme_io_md": false, 00:14:52.078 "write_zeroes": true, 00:14:52.078 "zcopy": false, 00:14:52.078 "get_zone_info": false, 00:14:52.078 "zone_management": false, 00:14:52.078 "zone_append": false, 00:14:52.078 "compare": false, 00:14:52.078 "compare_and_write": false, 00:14:52.078 "abort": false, 00:14:52.078 "seek_hole": false, 00:14:52.078 "seek_data": false, 00:14:52.078 "copy": false, 00:14:52.078 "nvme_iov_md": false 00:14:52.078 }, 00:14:52.078 "driver_specific": { 00:14:52.078 "raid": { 00:14:52.078 "uuid": "ea6e7d07-15e0-4ae9-bc27-929c272f56e4", 00:14:52.078 "strip_size_kb": 64, 00:14:52.078 "state": "online", 00:14:52.078 "raid_level": "raid5f", 00:14:52.078 "superblock": false, 00:14:52.078 "num_base_bdevs": 3, 00:14:52.078 "num_base_bdevs_discovered": 3, 00:14:52.078 "num_base_bdevs_operational": 3, 00:14:52.078 "base_bdevs_list": [ 00:14:52.078 { 00:14:52.078 "name": "BaseBdev1", 00:14:52.078 "uuid": "8b188649-ebfd-4d44-a1b9-37eb4d427ffa", 00:14:52.078 "is_configured": true, 00:14:52.078 "data_offset": 0, 00:14:52.078 "data_size": 65536 00:14:52.078 }, 00:14:52.078 { 00:14:52.078 "name": "BaseBdev2", 00:14:52.078 "uuid": "81ec2140-fd12-4cad-965e-90e07fb0bbef", 00:14:52.078 "is_configured": true, 00:14:52.078 "data_offset": 0, 00:14:52.078 "data_size": 65536 00:14:52.078 }, 00:14:52.078 { 00:14:52.078 "name": "BaseBdev3", 00:14:52.078 "uuid": "a817fadf-4866-492e-8cd5-791c79236cd9", 00:14:52.078 "is_configured": true, 00:14:52.078 "data_offset": 0, 00:14:52.078 "data_size": 65536 00:14:52.078 } 00:14:52.078 ] 00:14:52.078 } 00:14:52.078 } 00:14:52.078 }' 00:14:52.078 14:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.078 14:47:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:52.078 BaseBdev2 00:14:52.078 BaseBdev3' 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.078 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.337 [2024-12-09 14:47:30.238065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:52.337 14:47:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.337 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.338 "name": "Existed_Raid", 00:14:52.338 "uuid": "ea6e7d07-15e0-4ae9-bc27-929c272f56e4", 00:14:52.338 "strip_size_kb": 64, 00:14:52.338 "state": "online", 00:14:52.338 "raid_level": "raid5f", 00:14:52.338 "superblock": false, 00:14:52.338 "num_base_bdevs": 3, 00:14:52.338 "num_base_bdevs_discovered": 2, 00:14:52.338 "num_base_bdevs_operational": 2, 00:14:52.338 "base_bdevs_list": [ 00:14:52.338 { 00:14:52.338 "name": null, 00:14:52.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.338 "is_configured": false, 00:14:52.338 "data_offset": 0, 00:14:52.338 "data_size": 65536 00:14:52.338 }, 00:14:52.338 { 00:14:52.338 "name": "BaseBdev2", 00:14:52.338 "uuid": "81ec2140-fd12-4cad-965e-90e07fb0bbef", 00:14:52.338 "is_configured": true, 00:14:52.338 "data_offset": 0, 00:14:52.338 "data_size": 65536 00:14:52.338 }, 00:14:52.338 { 00:14:52.338 "name": "BaseBdev3", 00:14:52.338 "uuid": "a817fadf-4866-492e-8cd5-791c79236cd9", 00:14:52.338 "is_configured": true, 00:14:52.338 "data_offset": 0, 00:14:52.338 "data_size": 65536 00:14:52.338 } 00:14:52.338 ] 00:14:52.338 }' 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.338 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.906 [2024-12-09 14:47:30.876073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:52.906 [2024-12-09 14:47:30.876194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.906 [2024-12-09 14:47:30.972006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.906 14:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.906 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.906 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.906 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:52.906 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 [2024-12-09 14:47:31.031973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:53.165 [2024-12-09 14:47:31.032073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:53.165 
14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 BaseBdev2 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 [ 00:14:53.165 { 00:14:53.165 "name": "BaseBdev2", 00:14:53.165 "aliases": [ 00:14:53.165 "16b57f3e-6f97-408a-adf4-c0b45ae81025" 00:14:53.165 ], 00:14:53.165 "product_name": "Malloc disk", 00:14:53.165 "block_size": 512, 00:14:53.165 "num_blocks": 65536, 00:14:53.165 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:53.165 "assigned_rate_limits": { 00:14:53.165 "rw_ios_per_sec": 0, 00:14:53.165 "rw_mbytes_per_sec": 0, 00:14:53.165 "r_mbytes_per_sec": 0, 00:14:53.165 "w_mbytes_per_sec": 0 00:14:53.165 }, 00:14:53.165 "claimed": false, 00:14:53.165 "zoned": false, 00:14:53.165 "supported_io_types": { 00:14:53.165 "read": true, 00:14:53.165 "write": true, 00:14:53.165 "unmap": true, 00:14:53.165 "flush": true, 00:14:53.165 "reset": true, 00:14:53.165 "nvme_admin": false, 00:14:53.165 "nvme_io": false, 00:14:53.165 "nvme_io_md": false, 00:14:53.165 "write_zeroes": true, 00:14:53.165 "zcopy": true, 00:14:53.165 "get_zone_info": false, 00:14:53.165 "zone_management": false, 00:14:53.165 "zone_append": false, 00:14:53.165 "compare": false, 00:14:53.165 "compare_and_write": false, 00:14:53.165 "abort": true, 00:14:53.165 "seek_hole": false, 00:14:53.165 "seek_data": false, 00:14:53.165 "copy": true, 00:14:53.165 "nvme_iov_md": false 00:14:53.165 }, 00:14:53.165 "memory_domains": [ 00:14:53.165 { 00:14:53.165 "dma_device_id": "system", 00:14:53.165 "dma_device_type": 1 00:14:53.165 }, 00:14:53.165 { 00:14:53.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.165 "dma_device_type": 2 00:14:53.165 } 00:14:53.165 ], 00:14:53.165 "driver_specific": {} 
00:14:53.165 } 00:14:53.165 ] 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.165 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.424 BaseBdev3 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.424 14:47:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.424 [ 00:14:53.424 { 00:14:53.424 "name": "BaseBdev3", 00:14:53.424 "aliases": [ 00:14:53.424 "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830" 00:14:53.424 ], 00:14:53.424 "product_name": "Malloc disk", 00:14:53.424 "block_size": 512, 00:14:53.424 "num_blocks": 65536, 00:14:53.424 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:53.424 "assigned_rate_limits": { 00:14:53.424 "rw_ios_per_sec": 0, 00:14:53.424 "rw_mbytes_per_sec": 0, 00:14:53.424 "r_mbytes_per_sec": 0, 00:14:53.424 "w_mbytes_per_sec": 0 00:14:53.424 }, 00:14:53.424 "claimed": false, 00:14:53.424 "zoned": false, 00:14:53.424 "supported_io_types": { 00:14:53.424 "read": true, 00:14:53.424 "write": true, 00:14:53.424 "unmap": true, 00:14:53.424 "flush": true, 00:14:53.424 "reset": true, 00:14:53.424 "nvme_admin": false, 00:14:53.424 "nvme_io": false, 00:14:53.424 "nvme_io_md": false, 00:14:53.424 "write_zeroes": true, 00:14:53.424 "zcopy": true, 00:14:53.424 "get_zone_info": false, 00:14:53.424 "zone_management": false, 00:14:53.424 "zone_append": false, 00:14:53.424 "compare": false, 00:14:53.424 "compare_and_write": false, 00:14:53.424 "abort": true, 00:14:53.424 "seek_hole": false, 00:14:53.424 "seek_data": false, 00:14:53.424 "copy": true, 00:14:53.424 "nvme_iov_md": false 00:14:53.424 }, 00:14:53.424 "memory_domains": [ 00:14:53.424 { 00:14:53.424 "dma_device_id": "system", 00:14:53.424 "dma_device_type": 1 00:14:53.424 }, 00:14:53.424 { 00:14:53.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.424 "dma_device_type": 2 00:14:53.424 } 00:14:53.424 ], 
00:14:53.424 "driver_specific": {} 00:14:53.424 } 00:14:53.424 ] 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.424 [2024-12-09 14:47:31.345844] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.424 [2024-12-09 14:47:31.345933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.424 [2024-12-09 14:47:31.345976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.424 [2024-12-09 14:47:31.347764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid5f 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.424 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.425 "name": "Existed_Raid", 00:14:53.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.425 "strip_size_kb": 64, 00:14:53.425 "state": "configuring", 00:14:53.425 "raid_level": "raid5f", 00:14:53.425 "superblock": false, 00:14:53.425 "num_base_bdevs": 3, 00:14:53.425 "num_base_bdevs_discovered": 2, 00:14:53.425 "num_base_bdevs_operational": 3, 00:14:53.425 "base_bdevs_list": [ 00:14:53.425 { 00:14:53.425 "name": "BaseBdev1", 00:14:53.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.425 "is_configured": false, 00:14:53.425 "data_offset": 0, 00:14:53.425 "data_size": 0 
00:14:53.425 }, 00:14:53.425 { 00:14:53.425 "name": "BaseBdev2", 00:14:53.425 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:53.425 "is_configured": true, 00:14:53.425 "data_offset": 0, 00:14:53.425 "data_size": 65536 00:14:53.425 }, 00:14:53.425 { 00:14:53.425 "name": "BaseBdev3", 00:14:53.425 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:53.425 "is_configured": true, 00:14:53.425 "data_offset": 0, 00:14:53.425 "data_size": 65536 00:14:53.425 } 00:14:53.425 ] 00:14:53.425 }' 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.425 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.684 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:53.684 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.684 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.684 [2024-12-09 14:47:31.805122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.943 "name": "Existed_Raid", 00:14:53.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.943 "strip_size_kb": 64, 00:14:53.943 "state": "configuring", 00:14:53.943 "raid_level": "raid5f", 00:14:53.943 "superblock": false, 00:14:53.943 "num_base_bdevs": 3, 00:14:53.943 "num_base_bdevs_discovered": 1, 00:14:53.943 "num_base_bdevs_operational": 3, 00:14:53.943 "base_bdevs_list": [ 00:14:53.943 { 00:14:53.943 "name": "BaseBdev1", 00:14:53.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.943 "is_configured": false, 00:14:53.943 "data_offset": 0, 00:14:53.943 "data_size": 0 00:14:53.943 }, 00:14:53.943 { 00:14:53.943 "name": null, 00:14:53.943 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:53.943 "is_configured": false, 00:14:53.943 "data_offset": 0, 00:14:53.943 "data_size": 65536 00:14:53.943 }, 
00:14:53.943 { 00:14:53.943 "name": "BaseBdev3", 00:14:53.943 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:53.943 "is_configured": true, 00:14:53.943 "data_offset": 0, 00:14:53.943 "data_size": 65536 00:14:53.943 } 00:14:53.943 ] 00:14:53.943 }' 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.943 14:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.202 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.461 [2024-12-09 14:47:32.345310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.461 BaseBdev1 00:14:54.461 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.461 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 
00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.462 [ 00:14:54.462 { 00:14:54.462 "name": "BaseBdev1", 00:14:54.462 "aliases": [ 00:14:54.462 "74076221-8de5-4fb1-a571-d7fbeca960be" 00:14:54.462 ], 00:14:54.462 "product_name": "Malloc disk", 00:14:54.462 "block_size": 512, 00:14:54.462 "num_blocks": 65536, 00:14:54.462 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:54.462 "assigned_rate_limits": { 00:14:54.462 "rw_ios_per_sec": 0, 00:14:54.462 "rw_mbytes_per_sec": 0, 00:14:54.462 "r_mbytes_per_sec": 0, 00:14:54.462 "w_mbytes_per_sec": 0 00:14:54.462 }, 00:14:54.462 "claimed": true, 00:14:54.462 "claim_type": "exclusive_write", 00:14:54.462 "zoned": false, 00:14:54.462 "supported_io_types": { 00:14:54.462 "read": true, 00:14:54.462 "write": true, 00:14:54.462 "unmap": 
true, 00:14:54.462 "flush": true, 00:14:54.462 "reset": true, 00:14:54.462 "nvme_admin": false, 00:14:54.462 "nvme_io": false, 00:14:54.462 "nvme_io_md": false, 00:14:54.462 "write_zeroes": true, 00:14:54.462 "zcopy": true, 00:14:54.462 "get_zone_info": false, 00:14:54.462 "zone_management": false, 00:14:54.462 "zone_append": false, 00:14:54.462 "compare": false, 00:14:54.462 "compare_and_write": false, 00:14:54.462 "abort": true, 00:14:54.462 "seek_hole": false, 00:14:54.462 "seek_data": false, 00:14:54.462 "copy": true, 00:14:54.462 "nvme_iov_md": false 00:14:54.462 }, 00:14:54.462 "memory_domains": [ 00:14:54.462 { 00:14:54.462 "dma_device_id": "system", 00:14:54.462 "dma_device_type": 1 00:14:54.462 }, 00:14:54.462 { 00:14:54.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.462 "dma_device_type": 2 00:14:54.462 } 00:14:54.462 ], 00:14:54.462 "driver_specific": {} 00:14:54.462 } 00:14:54.462 ] 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.462 
14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.462 "name": "Existed_Raid", 00:14:54.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.462 "strip_size_kb": 64, 00:14:54.462 "state": "configuring", 00:14:54.462 "raid_level": "raid5f", 00:14:54.462 "superblock": false, 00:14:54.462 "num_base_bdevs": 3, 00:14:54.462 "num_base_bdevs_discovered": 2, 00:14:54.462 "num_base_bdevs_operational": 3, 00:14:54.462 "base_bdevs_list": [ 00:14:54.462 { 00:14:54.462 "name": "BaseBdev1", 00:14:54.462 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:54.462 "is_configured": true, 00:14:54.462 "data_offset": 0, 00:14:54.462 "data_size": 65536 00:14:54.462 }, 00:14:54.462 { 00:14:54.462 "name": null, 00:14:54.462 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:54.462 "is_configured": false, 00:14:54.462 "data_offset": 0, 00:14:54.462 "data_size": 65536 00:14:54.462 }, 00:14:54.462 { 00:14:54.462 "name": "BaseBdev3", 00:14:54.462 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:54.462 "is_configured": true, 
00:14:54.462 "data_offset": 0, 00:14:54.462 "data_size": 65536 00:14:54.462 } 00:14:54.462 ] 00:14:54.462 }' 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.462 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 [2024-12-09 14:47:32.896431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.030 14:47:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.030 "name": "Existed_Raid", 00:14:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.030 "strip_size_kb": 64, 00:14:55.030 "state": "configuring", 00:14:55.030 "raid_level": "raid5f", 00:14:55.030 "superblock": false, 00:14:55.030 "num_base_bdevs": 3, 00:14:55.030 "num_base_bdevs_discovered": 1, 00:14:55.030 "num_base_bdevs_operational": 3, 00:14:55.030 "base_bdevs_list": [ 00:14:55.030 { 00:14:55.030 "name": "BaseBdev1", 00:14:55.030 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:55.030 "is_configured": true, 
00:14:55.030 "data_offset": 0, 00:14:55.030 "data_size": 65536 00:14:55.030 }, 00:14:55.030 { 00:14:55.030 "name": null, 00:14:55.030 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:55.030 "is_configured": false, 00:14:55.030 "data_offset": 0, 00:14:55.030 "data_size": 65536 00:14:55.030 }, 00:14:55.030 { 00:14:55.030 "name": null, 00:14:55.030 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:55.030 "is_configured": false, 00:14:55.030 "data_offset": 0, 00:14:55.030 "data_size": 65536 00:14:55.030 } 00:14:55.030 ] 00:14:55.030 }' 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.030 14:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.290 [2024-12-09 14:47:33.355734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.290 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.549 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.549 "name": "Existed_Raid", 00:14:55.549 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:55.549 "strip_size_kb": 64, 00:14:55.549 "state": "configuring", 00:14:55.549 "raid_level": "raid5f", 00:14:55.549 "superblock": false, 00:14:55.549 "num_base_bdevs": 3, 00:14:55.549 "num_base_bdevs_discovered": 2, 00:14:55.549 "num_base_bdevs_operational": 3, 00:14:55.549 "base_bdevs_list": [ 00:14:55.549 { 00:14:55.549 "name": "BaseBdev1", 00:14:55.549 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:55.549 "is_configured": true, 00:14:55.549 "data_offset": 0, 00:14:55.549 "data_size": 65536 00:14:55.549 }, 00:14:55.549 { 00:14:55.549 "name": null, 00:14:55.549 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:55.549 "is_configured": false, 00:14:55.549 "data_offset": 0, 00:14:55.549 "data_size": 65536 00:14:55.549 }, 00:14:55.549 { 00:14:55.549 "name": "BaseBdev3", 00:14:55.549 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:55.549 "is_configured": true, 00:14:55.549 "data_offset": 0, 00:14:55.549 "data_size": 65536 00:14:55.549 } 00:14:55.549 ] 00:14:55.549 }' 00:14:55.549 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.549 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.808 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.808 [2024-12-09 14:47:33.898809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.068 14:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.068 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.068 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.068 14:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.068 "name": "Existed_Raid", 00:14:56.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.068 "strip_size_kb": 64, 00:14:56.068 "state": "configuring", 00:14:56.068 "raid_level": "raid5f", 00:14:56.068 "superblock": false, 00:14:56.068 "num_base_bdevs": 3, 00:14:56.068 "num_base_bdevs_discovered": 1, 00:14:56.068 "num_base_bdevs_operational": 3, 00:14:56.068 "base_bdevs_list": [ 00:14:56.068 { 00:14:56.068 "name": null, 00:14:56.068 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:56.068 "is_configured": false, 00:14:56.068 "data_offset": 0, 00:14:56.068 "data_size": 65536 00:14:56.068 }, 00:14:56.068 { 00:14:56.068 "name": null, 00:14:56.068 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:56.068 "is_configured": false, 00:14:56.068 "data_offset": 0, 00:14:56.068 "data_size": 65536 00:14:56.068 }, 00:14:56.068 { 00:14:56.068 "name": "BaseBdev3", 00:14:56.068 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:56.068 "is_configured": true, 00:14:56.068 "data_offset": 0, 00:14:56.068 "data_size": 65536 00:14:56.068 } 00:14:56.068 ] 00:14:56.068 }' 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.068 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.328 [2024-12-09 14:47:34.417142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.328 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.587 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.587 "name": "Existed_Raid", 00:14:56.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.587 "strip_size_kb": 64, 00:14:56.587 "state": "configuring", 00:14:56.587 "raid_level": "raid5f", 00:14:56.587 "superblock": false, 00:14:56.587 "num_base_bdevs": 3, 00:14:56.587 "num_base_bdevs_discovered": 2, 00:14:56.587 "num_base_bdevs_operational": 3, 00:14:56.587 "base_bdevs_list": [ 00:14:56.587 { 00:14:56.587 "name": null, 00:14:56.587 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:56.587 "is_configured": false, 00:14:56.587 "data_offset": 0, 00:14:56.587 "data_size": 65536 00:14:56.587 }, 00:14:56.587 { 00:14:56.587 "name": "BaseBdev2", 00:14:56.587 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:56.587 "is_configured": true, 00:14:56.587 "data_offset": 0, 00:14:56.587 "data_size": 65536 00:14:56.587 }, 00:14:56.587 { 00:14:56.587 "name": "BaseBdev3", 00:14:56.587 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:56.587 "is_configured": true, 00:14:56.587 "data_offset": 0, 00:14:56.587 "data_size": 65536 00:14:56.587 } 00:14:56.587 ] 00:14:56.587 }' 00:14:56.587 14:47:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.587 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 74076221-8de5-4fb1-a571-d7fbeca960be 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.849 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.849 [2024-12-09 14:47:34.959238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:56.849 [2024-12-09 
14:47:34.959397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:56.849 [2024-12-09 14:47:34.959430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:56.849 [2024-12-09 14:47:34.959764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:56.849 NewBaseBdev 00:14:56.849 [2024-12-09 14:47:34.965867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:56.849 [2024-12-09 14:47:34.965891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:56.849 [2024-12-09 14:47:34.966190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.148 14:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.148 [ 00:14:57.148 { 00:14:57.148 "name": "NewBaseBdev", 00:14:57.148 "aliases": [ 00:14:57.148 "74076221-8de5-4fb1-a571-d7fbeca960be" 00:14:57.148 ], 00:14:57.149 "product_name": "Malloc disk", 00:14:57.149 "block_size": 512, 00:14:57.149 "num_blocks": 65536, 00:14:57.149 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:57.149 "assigned_rate_limits": { 00:14:57.149 "rw_ios_per_sec": 0, 00:14:57.149 "rw_mbytes_per_sec": 0, 00:14:57.149 "r_mbytes_per_sec": 0, 00:14:57.149 "w_mbytes_per_sec": 0 00:14:57.149 }, 00:14:57.149 "claimed": true, 00:14:57.149 "claim_type": "exclusive_write", 00:14:57.149 "zoned": false, 00:14:57.149 "supported_io_types": { 00:14:57.149 "read": true, 00:14:57.149 "write": true, 00:14:57.149 "unmap": true, 00:14:57.149 "flush": true, 00:14:57.149 "reset": true, 00:14:57.149 "nvme_admin": false, 00:14:57.149 "nvme_io": false, 00:14:57.149 "nvme_io_md": false, 00:14:57.149 "write_zeroes": true, 00:14:57.149 "zcopy": true, 00:14:57.149 "get_zone_info": false, 00:14:57.149 "zone_management": false, 00:14:57.149 "zone_append": false, 00:14:57.149 "compare": false, 00:14:57.149 "compare_and_write": false, 00:14:57.149 "abort": true, 00:14:57.149 "seek_hole": false, 00:14:57.149 "seek_data": false, 00:14:57.149 "copy": true, 00:14:57.149 "nvme_iov_md": false 00:14:57.149 }, 00:14:57.149 "memory_domains": [ 00:14:57.149 { 00:14:57.149 "dma_device_id": "system", 00:14:57.149 "dma_device_type": 1 00:14:57.149 }, 00:14:57.149 { 00:14:57.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.149 "dma_device_type": 2 00:14:57.149 } 
00:14:57.149 ], 00:14:57.149 "driver_specific": {} 00:14:57.149 } 00:14:57.149 ] 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.149 "name": "Existed_Raid", 00:14:57.149 "uuid": "65018f6e-02ab-42e2-984e-a809734827f6", 00:14:57.149 "strip_size_kb": 64, 00:14:57.149 "state": "online", 00:14:57.149 "raid_level": "raid5f", 00:14:57.149 "superblock": false, 00:14:57.149 "num_base_bdevs": 3, 00:14:57.149 "num_base_bdevs_discovered": 3, 00:14:57.149 "num_base_bdevs_operational": 3, 00:14:57.149 "base_bdevs_list": [ 00:14:57.149 { 00:14:57.149 "name": "NewBaseBdev", 00:14:57.149 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:57.149 "is_configured": true, 00:14:57.149 "data_offset": 0, 00:14:57.149 "data_size": 65536 00:14:57.149 }, 00:14:57.149 { 00:14:57.149 "name": "BaseBdev2", 00:14:57.149 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:57.149 "is_configured": true, 00:14:57.149 "data_offset": 0, 00:14:57.149 "data_size": 65536 00:14:57.149 }, 00:14:57.149 { 00:14:57.149 "name": "BaseBdev3", 00:14:57.149 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:57.149 "is_configured": true, 00:14:57.149 "data_offset": 0, 00:14:57.149 "data_size": 65536 00:14:57.149 } 00:14:57.149 ] 00:14:57.149 }' 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.149 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.410 [2024-12-09 14:47:35.472890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.410 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.410 "name": "Existed_Raid", 00:14:57.410 "aliases": [ 00:14:57.410 "65018f6e-02ab-42e2-984e-a809734827f6" 00:14:57.410 ], 00:14:57.410 "product_name": "Raid Volume", 00:14:57.410 "block_size": 512, 00:14:57.410 "num_blocks": 131072, 00:14:57.410 "uuid": "65018f6e-02ab-42e2-984e-a809734827f6", 00:14:57.410 "assigned_rate_limits": { 00:14:57.410 "rw_ios_per_sec": 0, 00:14:57.410 "rw_mbytes_per_sec": 0, 00:14:57.410 "r_mbytes_per_sec": 0, 00:14:57.410 "w_mbytes_per_sec": 0 00:14:57.410 }, 00:14:57.410 "claimed": false, 00:14:57.410 "zoned": false, 00:14:57.410 "supported_io_types": { 00:14:57.410 "read": true, 00:14:57.410 "write": true, 00:14:57.410 "unmap": false, 00:14:57.410 "flush": false, 00:14:57.410 "reset": true, 00:14:57.410 "nvme_admin": false, 00:14:57.410 "nvme_io": false, 00:14:57.410 "nvme_io_md": false, 00:14:57.410 "write_zeroes": true, 00:14:57.410 "zcopy": false, 00:14:57.410 "get_zone_info": false, 00:14:57.410 "zone_management": false, 00:14:57.410 "zone_append": false, 00:14:57.410 "compare": false, 00:14:57.410 
"compare_and_write": false, 00:14:57.410 "abort": false, 00:14:57.410 "seek_hole": false, 00:14:57.410 "seek_data": false, 00:14:57.410 "copy": false, 00:14:57.410 "nvme_iov_md": false 00:14:57.410 }, 00:14:57.410 "driver_specific": { 00:14:57.410 "raid": { 00:14:57.410 "uuid": "65018f6e-02ab-42e2-984e-a809734827f6", 00:14:57.410 "strip_size_kb": 64, 00:14:57.410 "state": "online", 00:14:57.410 "raid_level": "raid5f", 00:14:57.410 "superblock": false, 00:14:57.410 "num_base_bdevs": 3, 00:14:57.410 "num_base_bdevs_discovered": 3, 00:14:57.410 "num_base_bdevs_operational": 3, 00:14:57.410 "base_bdevs_list": [ 00:14:57.410 { 00:14:57.411 "name": "NewBaseBdev", 00:14:57.411 "uuid": "74076221-8de5-4fb1-a571-d7fbeca960be", 00:14:57.411 "is_configured": true, 00:14:57.411 "data_offset": 0, 00:14:57.411 "data_size": 65536 00:14:57.411 }, 00:14:57.411 { 00:14:57.411 "name": "BaseBdev2", 00:14:57.411 "uuid": "16b57f3e-6f97-408a-adf4-c0b45ae81025", 00:14:57.411 "is_configured": true, 00:14:57.411 "data_offset": 0, 00:14:57.411 "data_size": 65536 00:14:57.411 }, 00:14:57.411 { 00:14:57.411 "name": "BaseBdev3", 00:14:57.411 "uuid": "dd2057fc-7bdc-4b27-9d4f-df8e35ab0830", 00:14:57.411 "is_configured": true, 00:14:57.411 "data_offset": 0, 00:14:57.411 "data_size": 65536 00:14:57.411 } 00:14:57.411 ] 00:14:57.411 } 00:14:57.411 } 00:14:57.411 }' 00:14:57.411 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:57.670 BaseBdev2 00:14:57.670 BaseBdev3' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.670 14:47:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.670 [2024-12-09 14:47:35.752133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.670 [2024-12-09 14:47:35.752212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.670 [2024-12-09 14:47:35.752337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.670 [2024-12-09 14:47:35.752681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.670 [2024-12-09 14:47:35.752741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.670 
14:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81207 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81207 ']' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 81207 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81207 00:14:57.670 killing process with pid 81207 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81207' 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 81207 00:14:57.670 [2024-12-09 14:47:35.787381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.670 14:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 81207 00:14:58.238 [2024-12-09 14:47:36.086335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.178 14:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:59.178 00:14:59.178 real 0m10.998s 00:14:59.178 user 0m17.511s 00:14:59.178 sys 0m1.980s 00:14:59.178 14:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.178 ************************************ 00:14:59.178 END TEST raid5f_state_function_test 00:14:59.178 ************************************ 00:14:59.178 14:47:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.178 14:47:37 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:59.178 14:47:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:59.178 14:47:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.178 14:47:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.437 ************************************ 00:14:59.437 START TEST raid5f_state_function_test_sb 00:14:59.437 ************************************ 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.437 14:47:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81827 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81827' 00:14:59.437 Process raid pid: 81827 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81827 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81827 ']' 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.437 14:47:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.437 [2024-12-09 14:47:37.406730] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:14:59.437 [2024-12-09 14:47:37.406912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.696 [2024-12-09 14:47:37.583403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.696 [2024-12-09 14:47:37.702090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.955 [2024-12-09 14:47:37.902792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.955 [2024-12-09 14:47:37.902841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.214 [2024-12-09 14:47:38.262455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.214 [2024-12-09 14:47:38.262582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.214 [2024-12-09 14:47:38.262622] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.214 [2024-12-09 14:47:38.262649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.214 [2024-12-09 14:47:38.262669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:00.214 [2024-12-09 14:47:38.262690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.214 14:47:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.214 "name": "Existed_Raid", 00:15:00.214 "uuid": "af6f9a19-649d-4e56-913b-1ff98a6aa1d0", 00:15:00.214 "strip_size_kb": 64, 00:15:00.214 "state": "configuring", 00:15:00.214 "raid_level": "raid5f", 00:15:00.214 "superblock": true, 00:15:00.214 "num_base_bdevs": 3, 00:15:00.214 "num_base_bdevs_discovered": 0, 00:15:00.214 "num_base_bdevs_operational": 3, 00:15:00.214 "base_bdevs_list": [ 00:15:00.214 { 00:15:00.214 "name": "BaseBdev1", 00:15:00.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.214 "is_configured": false, 00:15:00.214 "data_offset": 0, 00:15:00.214 "data_size": 0 00:15:00.214 }, 00:15:00.214 { 00:15:00.214 "name": "BaseBdev2", 00:15:00.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.214 "is_configured": false, 00:15:00.214 "data_offset": 0, 00:15:00.214 "data_size": 0 00:15:00.214 }, 00:15:00.214 { 00:15:00.214 "name": "BaseBdev3", 00:15:00.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.214 "is_configured": false, 00:15:00.214 "data_offset": 0, 00:15:00.214 "data_size": 0 00:15:00.214 } 00:15:00.214 ] 00:15:00.214 }' 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.214 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.783 [2024-12-09 14:47:38.741540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.783 
[2024-12-09 14:47:38.741641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.783 [2024-12-09 14:47:38.753540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.783 [2024-12-09 14:47:38.753658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.783 [2024-12-09 14:47:38.753695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.783 [2024-12-09 14:47:38.753723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.783 [2024-12-09 14:47:38.753744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.783 [2024-12-09 14:47:38.753774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.783 [2024-12-09 14:47:38.801986] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.783 BaseBdev1 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:00.783 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.784 [ 00:15:00.784 { 00:15:00.784 "name": "BaseBdev1", 00:15:00.784 "aliases": [ 00:15:00.784 "0e9320f1-9846-4b84-9ae3-601d7413f64d" 00:15:00.784 ], 00:15:00.784 "product_name": "Malloc disk", 00:15:00.784 "block_size": 512, 00:15:00.784 
"num_blocks": 65536, 00:15:00.784 "uuid": "0e9320f1-9846-4b84-9ae3-601d7413f64d", 00:15:00.784 "assigned_rate_limits": { 00:15:00.784 "rw_ios_per_sec": 0, 00:15:00.784 "rw_mbytes_per_sec": 0, 00:15:00.784 "r_mbytes_per_sec": 0, 00:15:00.784 "w_mbytes_per_sec": 0 00:15:00.784 }, 00:15:00.784 "claimed": true, 00:15:00.784 "claim_type": "exclusive_write", 00:15:00.784 "zoned": false, 00:15:00.784 "supported_io_types": { 00:15:00.784 "read": true, 00:15:00.784 "write": true, 00:15:00.784 "unmap": true, 00:15:00.784 "flush": true, 00:15:00.784 "reset": true, 00:15:00.784 "nvme_admin": false, 00:15:00.784 "nvme_io": false, 00:15:00.784 "nvme_io_md": false, 00:15:00.784 "write_zeroes": true, 00:15:00.784 "zcopy": true, 00:15:00.784 "get_zone_info": false, 00:15:00.784 "zone_management": false, 00:15:00.784 "zone_append": false, 00:15:00.784 "compare": false, 00:15:00.784 "compare_and_write": false, 00:15:00.784 "abort": true, 00:15:00.784 "seek_hole": false, 00:15:00.784 "seek_data": false, 00:15:00.784 "copy": true, 00:15:00.784 "nvme_iov_md": false 00:15:00.784 }, 00:15:00.784 "memory_domains": [ 00:15:00.784 { 00:15:00.784 "dma_device_id": "system", 00:15:00.784 "dma_device_type": 1 00:15:00.784 }, 00:15:00.784 { 00:15:00.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.784 "dma_device_type": 2 00:15:00.784 } 00:15:00.784 ], 00:15:00.784 "driver_specific": {} 00:15:00.784 } 00:15:00.784 ] 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.784 "name": "Existed_Raid", 00:15:00.784 "uuid": "baf70d74-0b6c-4499-891e-7f4a6816520a", 00:15:00.784 "strip_size_kb": 64, 00:15:00.784 "state": "configuring", 00:15:00.784 "raid_level": "raid5f", 00:15:00.784 "superblock": true, 00:15:00.784 "num_base_bdevs": 3, 00:15:00.784 "num_base_bdevs_discovered": 1, 00:15:00.784 "num_base_bdevs_operational": 3, 00:15:00.784 "base_bdevs_list": [ 00:15:00.784 { 00:15:00.784 
"name": "BaseBdev1", 00:15:00.784 "uuid": "0e9320f1-9846-4b84-9ae3-601d7413f64d", 00:15:00.784 "is_configured": true, 00:15:00.784 "data_offset": 2048, 00:15:00.784 "data_size": 63488 00:15:00.784 }, 00:15:00.784 { 00:15:00.784 "name": "BaseBdev2", 00:15:00.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.784 "is_configured": false, 00:15:00.784 "data_offset": 0, 00:15:00.784 "data_size": 0 00:15:00.784 }, 00:15:00.784 { 00:15:00.784 "name": "BaseBdev3", 00:15:00.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.784 "is_configured": false, 00:15:00.784 "data_offset": 0, 00:15:00.784 "data_size": 0 00:15:00.784 } 00:15:00.784 ] 00:15:00.784 }' 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.784 14:47:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.352 [2024-12-09 14:47:39.313185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.352 [2024-12-09 14:47:39.313295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:01.352 [2024-12-09 14:47:39.325220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.352 [2024-12-09 14:47:39.327348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.352 [2024-12-09 14:47:39.327432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.352 [2024-12-09 14:47:39.327464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.352 [2024-12-09 14:47:39.327491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.352 "name": "Existed_Raid", 00:15:01.352 "uuid": "a0e16a2f-7c2d-40d5-bc64-af5c24587361", 00:15:01.352 "strip_size_kb": 64, 00:15:01.352 "state": "configuring", 00:15:01.352 "raid_level": "raid5f", 00:15:01.352 "superblock": true, 00:15:01.352 "num_base_bdevs": 3, 00:15:01.352 "num_base_bdevs_discovered": 1, 00:15:01.352 "num_base_bdevs_operational": 3, 00:15:01.352 "base_bdevs_list": [ 00:15:01.352 { 00:15:01.352 "name": "BaseBdev1", 00:15:01.352 "uuid": "0e9320f1-9846-4b84-9ae3-601d7413f64d", 00:15:01.352 "is_configured": true, 00:15:01.352 "data_offset": 2048, 00:15:01.352 "data_size": 63488 00:15:01.352 }, 00:15:01.352 { 00:15:01.352 "name": "BaseBdev2", 00:15:01.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.352 "is_configured": false, 00:15:01.352 "data_offset": 0, 00:15:01.352 "data_size": 0 00:15:01.352 }, 00:15:01.352 { 00:15:01.352 "name": "BaseBdev3", 00:15:01.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.352 "is_configured": false, 00:15:01.352 "data_offset": 0, 00:15:01.352 "data_size": 
0 00:15:01.352 } 00:15:01.352 ] 00:15:01.352 }' 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.352 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.922 [2024-12-09 14:47:39.788641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.922 BaseBdev2 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.922 [ 00:15:01.922 { 00:15:01.922 "name": "BaseBdev2", 00:15:01.922 "aliases": [ 00:15:01.922 "6c3c8b61-a063-439f-b17f-7a99fb1f1c8a" 00:15:01.922 ], 00:15:01.922 "product_name": "Malloc disk", 00:15:01.922 "block_size": 512, 00:15:01.922 "num_blocks": 65536, 00:15:01.922 "uuid": "6c3c8b61-a063-439f-b17f-7a99fb1f1c8a", 00:15:01.922 "assigned_rate_limits": { 00:15:01.922 "rw_ios_per_sec": 0, 00:15:01.922 "rw_mbytes_per_sec": 0, 00:15:01.922 "r_mbytes_per_sec": 0, 00:15:01.922 "w_mbytes_per_sec": 0 00:15:01.922 }, 00:15:01.922 "claimed": true, 00:15:01.922 "claim_type": "exclusive_write", 00:15:01.922 "zoned": false, 00:15:01.922 "supported_io_types": { 00:15:01.922 "read": true, 00:15:01.922 "write": true, 00:15:01.922 "unmap": true, 00:15:01.922 "flush": true, 00:15:01.922 "reset": true, 00:15:01.922 "nvme_admin": false, 00:15:01.922 "nvme_io": false, 00:15:01.922 "nvme_io_md": false, 00:15:01.922 "write_zeroes": true, 00:15:01.922 "zcopy": true, 00:15:01.922 "get_zone_info": false, 00:15:01.922 "zone_management": false, 00:15:01.922 "zone_append": false, 00:15:01.922 "compare": false, 00:15:01.922 "compare_and_write": false, 00:15:01.922 "abort": true, 00:15:01.922 "seek_hole": false, 00:15:01.922 "seek_data": false, 00:15:01.922 "copy": true, 00:15:01.922 "nvme_iov_md": false 00:15:01.922 }, 00:15:01.922 "memory_domains": [ 00:15:01.922 { 00:15:01.922 "dma_device_id": "system", 00:15:01.922 "dma_device_type": 1 00:15:01.922 }, 00:15:01.922 { 00:15:01.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.922 "dma_device_type": 2 00:15:01.922 } 
00:15:01.922 ], 00:15:01.922 "driver_specific": {} 00:15:01.922 } 00:15:01.922 ] 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:01.922 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.923 "name": "Existed_Raid", 00:15:01.923 "uuid": "a0e16a2f-7c2d-40d5-bc64-af5c24587361", 00:15:01.923 "strip_size_kb": 64, 00:15:01.923 "state": "configuring", 00:15:01.923 "raid_level": "raid5f", 00:15:01.923 "superblock": true, 00:15:01.923 "num_base_bdevs": 3, 00:15:01.923 "num_base_bdevs_discovered": 2, 00:15:01.923 "num_base_bdevs_operational": 3, 00:15:01.923 "base_bdevs_list": [ 00:15:01.923 { 00:15:01.923 "name": "BaseBdev1", 00:15:01.923 "uuid": "0e9320f1-9846-4b84-9ae3-601d7413f64d", 00:15:01.923 "is_configured": true, 00:15:01.923 "data_offset": 2048, 00:15:01.923 "data_size": 63488 00:15:01.923 }, 00:15:01.923 { 00:15:01.923 "name": "BaseBdev2", 00:15:01.923 "uuid": "6c3c8b61-a063-439f-b17f-7a99fb1f1c8a", 00:15:01.923 "is_configured": true, 00:15:01.923 "data_offset": 2048, 00:15:01.923 "data_size": 63488 00:15:01.923 }, 00:15:01.923 { 00:15:01.923 "name": "BaseBdev3", 00:15:01.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.923 "is_configured": false, 00:15:01.923 "data_offset": 0, 00:15:01.923 "data_size": 0 00:15:01.923 } 00:15:01.923 ] 00:15:01.923 }' 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.923 14:47:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.491 [2024-12-09 14:47:40.373312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.491 [2024-12-09 14:47:40.373705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:02.491 [2024-12-09 14:47:40.373769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.491 [2024-12-09 14:47:40.374061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.491 BaseBdev3 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.491 [2024-12-09 14:47:40.380025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:02.491 [2024-12-09 14:47:40.380088] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:02.491 [2024-12-09 14:47:40.380325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.491 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.491 [ 00:15:02.491 { 00:15:02.491 "name": "BaseBdev3", 00:15:02.491 "aliases": [ 00:15:02.491 "7db5a6a5-aa0b-4754-9b1b-5ac3afaf6b4f" 00:15:02.491 ], 00:15:02.491 "product_name": "Malloc disk", 00:15:02.491 "block_size": 512, 00:15:02.491 "num_blocks": 65536, 00:15:02.491 "uuid": "7db5a6a5-aa0b-4754-9b1b-5ac3afaf6b4f", 00:15:02.491 "assigned_rate_limits": { 00:15:02.491 "rw_ios_per_sec": 0, 00:15:02.491 "rw_mbytes_per_sec": 0, 00:15:02.491 "r_mbytes_per_sec": 0, 00:15:02.491 "w_mbytes_per_sec": 0 00:15:02.492 }, 00:15:02.492 "claimed": true, 00:15:02.492 "claim_type": "exclusive_write", 00:15:02.492 "zoned": false, 00:15:02.492 "supported_io_types": { 00:15:02.492 "read": true, 00:15:02.492 "write": true, 00:15:02.492 "unmap": true, 00:15:02.492 "flush": true, 00:15:02.492 "reset": true, 00:15:02.492 "nvme_admin": false, 00:15:02.492 "nvme_io": false, 00:15:02.492 "nvme_io_md": false, 00:15:02.492 "write_zeroes": true, 00:15:02.492 "zcopy": true, 00:15:02.492 "get_zone_info": false, 00:15:02.492 "zone_management": false, 00:15:02.492 "zone_append": false, 00:15:02.492 "compare": false, 00:15:02.492 "compare_and_write": false, 00:15:02.492 "abort": true, 00:15:02.492 "seek_hole": false, 00:15:02.492 "seek_data": false, 00:15:02.492 "copy": true, 00:15:02.492 
"nvme_iov_md": false 00:15:02.492 }, 00:15:02.492 "memory_domains": [ 00:15:02.492 { 00:15:02.492 "dma_device_id": "system", 00:15:02.492 "dma_device_type": 1 00:15:02.492 }, 00:15:02.492 { 00:15:02.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.492 "dma_device_type": 2 00:15:02.492 } 00:15:02.492 ], 00:15:02.492 "driver_specific": {} 00:15:02.492 } 00:15:02.492 ] 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.492 "name": "Existed_Raid", 00:15:02.492 "uuid": "a0e16a2f-7c2d-40d5-bc64-af5c24587361", 00:15:02.492 "strip_size_kb": 64, 00:15:02.492 "state": "online", 00:15:02.492 "raid_level": "raid5f", 00:15:02.492 "superblock": true, 00:15:02.492 "num_base_bdevs": 3, 00:15:02.492 "num_base_bdevs_discovered": 3, 00:15:02.492 "num_base_bdevs_operational": 3, 00:15:02.492 "base_bdevs_list": [ 00:15:02.492 { 00:15:02.492 "name": "BaseBdev1", 00:15:02.492 "uuid": "0e9320f1-9846-4b84-9ae3-601d7413f64d", 00:15:02.492 "is_configured": true, 00:15:02.492 "data_offset": 2048, 00:15:02.492 "data_size": 63488 00:15:02.492 }, 00:15:02.492 { 00:15:02.492 "name": "BaseBdev2", 00:15:02.492 "uuid": "6c3c8b61-a063-439f-b17f-7a99fb1f1c8a", 00:15:02.492 "is_configured": true, 00:15:02.492 "data_offset": 2048, 00:15:02.492 "data_size": 63488 00:15:02.492 }, 00:15:02.492 { 00:15:02.492 "name": "BaseBdev3", 00:15:02.492 "uuid": "7db5a6a5-aa0b-4754-9b1b-5ac3afaf6b4f", 00:15:02.492 "is_configured": true, 00:15:02.492 "data_offset": 2048, 00:15:02.492 "data_size": 63488 00:15:02.492 } 00:15:02.492 ] 00:15:02.492 }' 00:15:02.492 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.492 14:47:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.070 [2024-12-09 14:47:40.894191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.070 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.070 "name": "Existed_Raid", 00:15:03.071 "aliases": [ 00:15:03.071 "a0e16a2f-7c2d-40d5-bc64-af5c24587361" 00:15:03.071 ], 00:15:03.071 "product_name": "Raid Volume", 00:15:03.071 "block_size": 512, 00:15:03.071 "num_blocks": 126976, 00:15:03.071 "uuid": "a0e16a2f-7c2d-40d5-bc64-af5c24587361", 00:15:03.071 "assigned_rate_limits": { 00:15:03.071 "rw_ios_per_sec": 0, 00:15:03.071 
"rw_mbytes_per_sec": 0, 00:15:03.071 "r_mbytes_per_sec": 0, 00:15:03.071 "w_mbytes_per_sec": 0 00:15:03.071 }, 00:15:03.071 "claimed": false, 00:15:03.071 "zoned": false, 00:15:03.071 "supported_io_types": { 00:15:03.071 "read": true, 00:15:03.071 "write": true, 00:15:03.071 "unmap": false, 00:15:03.071 "flush": false, 00:15:03.071 "reset": true, 00:15:03.071 "nvme_admin": false, 00:15:03.071 "nvme_io": false, 00:15:03.071 "nvme_io_md": false, 00:15:03.071 "write_zeroes": true, 00:15:03.071 "zcopy": false, 00:15:03.071 "get_zone_info": false, 00:15:03.071 "zone_management": false, 00:15:03.071 "zone_append": false, 00:15:03.071 "compare": false, 00:15:03.071 "compare_and_write": false, 00:15:03.071 "abort": false, 00:15:03.071 "seek_hole": false, 00:15:03.071 "seek_data": false, 00:15:03.071 "copy": false, 00:15:03.071 "nvme_iov_md": false 00:15:03.071 }, 00:15:03.071 "driver_specific": { 00:15:03.071 "raid": { 00:15:03.071 "uuid": "a0e16a2f-7c2d-40d5-bc64-af5c24587361", 00:15:03.071 "strip_size_kb": 64, 00:15:03.071 "state": "online", 00:15:03.071 "raid_level": "raid5f", 00:15:03.071 "superblock": true, 00:15:03.071 "num_base_bdevs": 3, 00:15:03.071 "num_base_bdevs_discovered": 3, 00:15:03.071 "num_base_bdevs_operational": 3, 00:15:03.071 "base_bdevs_list": [ 00:15:03.071 { 00:15:03.071 "name": "BaseBdev1", 00:15:03.071 "uuid": "0e9320f1-9846-4b84-9ae3-601d7413f64d", 00:15:03.071 "is_configured": true, 00:15:03.071 "data_offset": 2048, 00:15:03.071 "data_size": 63488 00:15:03.071 }, 00:15:03.071 { 00:15:03.071 "name": "BaseBdev2", 00:15:03.071 "uuid": "6c3c8b61-a063-439f-b17f-7a99fb1f1c8a", 00:15:03.071 "is_configured": true, 00:15:03.071 "data_offset": 2048, 00:15:03.071 "data_size": 63488 00:15:03.071 }, 00:15:03.071 { 00:15:03.071 "name": "BaseBdev3", 00:15:03.071 "uuid": "7db5a6a5-aa0b-4754-9b1b-5ac3afaf6b4f", 00:15:03.071 "is_configured": true, 00:15:03.071 "data_offset": 2048, 00:15:03.071 "data_size": 63488 00:15:03.071 } 00:15:03.071 ] 00:15:03.071 } 
00:15:03.071 } 00:15:03.071 }' 00:15:03.071 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.071 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:03.071 BaseBdev2 00:15:03.071 BaseBdev3' 00:15:03.071 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.071 14:47:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.071 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.071 [2024-12-09 
14:47:41.141592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.331 14:47:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.331 "name": "Existed_Raid", 00:15:03.331 "uuid": "a0e16a2f-7c2d-40d5-bc64-af5c24587361", 00:15:03.331 "strip_size_kb": 64, 00:15:03.331 "state": "online", 00:15:03.331 "raid_level": "raid5f", 00:15:03.331 "superblock": true, 00:15:03.331 "num_base_bdevs": 3, 00:15:03.331 "num_base_bdevs_discovered": 2, 00:15:03.331 "num_base_bdevs_operational": 2, 00:15:03.331 "base_bdevs_list": [ 00:15:03.331 { 00:15:03.331 "name": null, 00:15:03.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.331 "is_configured": false, 00:15:03.331 "data_offset": 0, 00:15:03.331 "data_size": 63488 00:15:03.331 }, 00:15:03.331 { 00:15:03.331 "name": "BaseBdev2", 00:15:03.331 "uuid": "6c3c8b61-a063-439f-b17f-7a99fb1f1c8a", 00:15:03.331 "is_configured": true, 00:15:03.331 "data_offset": 2048, 00:15:03.331 "data_size": 63488 00:15:03.331 }, 00:15:03.331 { 00:15:03.331 "name": "BaseBdev3", 00:15:03.331 "uuid": "7db5a6a5-aa0b-4754-9b1b-5ac3afaf6b4f", 00:15:03.331 "is_configured": true, 00:15:03.331 "data_offset": 2048, 00:15:03.331 "data_size": 63488 00:15:03.331 } 00:15:03.331 ] 00:15:03.331 }' 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.331 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.901 [2024-12-09 14:47:41.775161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.901 [2024-12-09 14:47:41.775402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.901 [2024-12-09 14:47:41.876996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:03.901 14:47:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.901 14:47:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.901 [2024-12-09 14:47:41.936984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:03.901 [2024-12-09 14:47:41.937079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.162 
14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.162 BaseBdev2 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.162 14:47:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.162 [ 00:15:04.162 { 00:15:04.162 "name": "BaseBdev2", 00:15:04.162 "aliases": [ 00:15:04.162 "026794f0-7a6f-4d47-b35d-3a10486f62ba" 00:15:04.162 ], 00:15:04.162 "product_name": "Malloc disk", 00:15:04.162 "block_size": 512, 00:15:04.162 "num_blocks": 65536, 00:15:04.162 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:04.162 "assigned_rate_limits": { 00:15:04.162 "rw_ios_per_sec": 0, 00:15:04.162 "rw_mbytes_per_sec": 0, 00:15:04.162 "r_mbytes_per_sec": 0, 00:15:04.162 "w_mbytes_per_sec": 0 00:15:04.162 }, 00:15:04.162 "claimed": false, 00:15:04.162 "zoned": false, 00:15:04.162 "supported_io_types": { 00:15:04.162 "read": true, 00:15:04.162 "write": true, 00:15:04.162 "unmap": true, 00:15:04.162 "flush": true, 00:15:04.162 "reset": true, 00:15:04.162 "nvme_admin": false, 00:15:04.162 "nvme_io": false, 00:15:04.162 "nvme_io_md": false, 00:15:04.162 "write_zeroes": true, 00:15:04.162 "zcopy": true, 00:15:04.162 "get_zone_info": false, 
00:15:04.162 "zone_management": false, 00:15:04.162 "zone_append": false, 00:15:04.162 "compare": false, 00:15:04.162 "compare_and_write": false, 00:15:04.162 "abort": true, 00:15:04.162 "seek_hole": false, 00:15:04.162 "seek_data": false, 00:15:04.162 "copy": true, 00:15:04.162 "nvme_iov_md": false 00:15:04.162 }, 00:15:04.162 "memory_domains": [ 00:15:04.162 { 00:15:04.162 "dma_device_id": "system", 00:15:04.162 "dma_device_type": 1 00:15:04.162 }, 00:15:04.162 { 00:15:04.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.162 "dma_device_type": 2 00:15:04.162 } 00:15:04.162 ], 00:15:04.162 "driver_specific": {} 00:15:04.162 } 00:15:04.162 ] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.162 BaseBdev3 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:04.162 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.162 14:47:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.163 [ 00:15:04.163 { 00:15:04.163 "name": "BaseBdev3", 00:15:04.163 "aliases": [ 00:15:04.163 "99075526-df10-45c7-a1c5-34c3218d8bad" 00:15:04.163 ], 00:15:04.163 "product_name": "Malloc disk", 00:15:04.163 "block_size": 512, 00:15:04.163 "num_blocks": 65536, 00:15:04.163 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:04.163 "assigned_rate_limits": { 00:15:04.163 "rw_ios_per_sec": 0, 00:15:04.163 "rw_mbytes_per_sec": 0, 00:15:04.163 "r_mbytes_per_sec": 0, 00:15:04.163 "w_mbytes_per_sec": 0 00:15:04.163 }, 00:15:04.163 "claimed": false, 00:15:04.163 "zoned": false, 00:15:04.163 "supported_io_types": { 00:15:04.163 "read": true, 00:15:04.163 "write": true, 00:15:04.163 "unmap": true, 00:15:04.163 "flush": true, 00:15:04.163 "reset": true, 00:15:04.163 "nvme_admin": false, 00:15:04.163 "nvme_io": false, 00:15:04.163 "nvme_io_md": 
false, 00:15:04.163 "write_zeroes": true, 00:15:04.163 "zcopy": true, 00:15:04.163 "get_zone_info": false, 00:15:04.163 "zone_management": false, 00:15:04.163 "zone_append": false, 00:15:04.163 "compare": false, 00:15:04.163 "compare_and_write": false, 00:15:04.163 "abort": true, 00:15:04.163 "seek_hole": false, 00:15:04.163 "seek_data": false, 00:15:04.163 "copy": true, 00:15:04.163 "nvme_iov_md": false 00:15:04.163 }, 00:15:04.163 "memory_domains": [ 00:15:04.163 { 00:15:04.163 "dma_device_id": "system", 00:15:04.163 "dma_device_type": 1 00:15:04.163 }, 00:15:04.163 { 00:15:04.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.163 "dma_device_type": 2 00:15:04.163 } 00:15:04.163 ], 00:15:04.163 "driver_specific": {} 00:15:04.163 } 00:15:04.163 ] 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.163 [2024-12-09 14:47:42.263734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.163 [2024-12-09 14:47:42.263848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.163 [2024-12-09 14:47:42.263910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:04.163 [2024-12-09 14:47:42.266121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.163 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.422 14:47:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.422 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.422 "name": "Existed_Raid", 00:15:04.422 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:04.422 "strip_size_kb": 64, 00:15:04.422 "state": "configuring", 00:15:04.422 "raid_level": "raid5f", 00:15:04.422 "superblock": true, 00:15:04.422 "num_base_bdevs": 3, 00:15:04.422 "num_base_bdevs_discovered": 2, 00:15:04.422 "num_base_bdevs_operational": 3, 00:15:04.422 "base_bdevs_list": [ 00:15:04.422 { 00:15:04.422 "name": "BaseBdev1", 00:15:04.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.422 "is_configured": false, 00:15:04.422 "data_offset": 0, 00:15:04.422 "data_size": 0 00:15:04.422 }, 00:15:04.422 { 00:15:04.422 "name": "BaseBdev2", 00:15:04.422 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:04.422 "is_configured": true, 00:15:04.422 "data_offset": 2048, 00:15:04.422 "data_size": 63488 00:15:04.422 }, 00:15:04.422 { 00:15:04.422 "name": "BaseBdev3", 00:15:04.422 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:04.422 "is_configured": true, 00:15:04.422 "data_offset": 2048, 00:15:04.422 "data_size": 63488 00:15:04.422 } 00:15:04.422 ] 00:15:04.422 }' 00:15:04.422 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.422 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.681 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:04.681 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.681 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.682 [2024-12-09 14:47:42.707137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.682 
14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:04.682 "name": "Existed_Raid", 00:15:04.682 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:04.682 "strip_size_kb": 64, 00:15:04.682 "state": "configuring", 00:15:04.682 "raid_level": "raid5f", 00:15:04.682 "superblock": true, 00:15:04.682 "num_base_bdevs": 3, 00:15:04.682 "num_base_bdevs_discovered": 1, 00:15:04.682 "num_base_bdevs_operational": 3, 00:15:04.682 "base_bdevs_list": [ 00:15:04.682 { 00:15:04.682 "name": "BaseBdev1", 00:15:04.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.682 "is_configured": false, 00:15:04.682 "data_offset": 0, 00:15:04.682 "data_size": 0 00:15:04.682 }, 00:15:04.682 { 00:15:04.682 "name": null, 00:15:04.682 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:04.682 "is_configured": false, 00:15:04.682 "data_offset": 0, 00:15:04.682 "data_size": 63488 00:15:04.682 }, 00:15:04.682 { 00:15:04.682 "name": "BaseBdev3", 00:15:04.682 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:04.682 "is_configured": true, 00:15:04.682 "data_offset": 2048, 00:15:04.682 "data_size": 63488 00:15:04.682 } 00:15:04.682 ] 00:15:04.682 }' 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.682 14:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.252 [2024-12-09 14:47:43.268832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.252 BaseBdev1 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.252 
14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.252 [ 00:15:05.252 { 00:15:05.252 "name": "BaseBdev1", 00:15:05.252 "aliases": [ 00:15:05.252 "23a06286-dbc8-4064-ac65-1700deaf3e7a" 00:15:05.252 ], 00:15:05.252 "product_name": "Malloc disk", 00:15:05.252 "block_size": 512, 00:15:05.252 "num_blocks": 65536, 00:15:05.252 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:05.252 "assigned_rate_limits": { 00:15:05.252 "rw_ios_per_sec": 0, 00:15:05.252 "rw_mbytes_per_sec": 0, 00:15:05.252 "r_mbytes_per_sec": 0, 00:15:05.252 "w_mbytes_per_sec": 0 00:15:05.252 }, 00:15:05.252 "claimed": true, 00:15:05.252 "claim_type": "exclusive_write", 00:15:05.252 "zoned": false, 00:15:05.252 "supported_io_types": { 00:15:05.252 "read": true, 00:15:05.252 "write": true, 00:15:05.252 "unmap": true, 00:15:05.252 "flush": true, 00:15:05.252 "reset": true, 00:15:05.252 "nvme_admin": false, 00:15:05.252 "nvme_io": false, 00:15:05.252 "nvme_io_md": false, 00:15:05.252 "write_zeroes": true, 00:15:05.252 "zcopy": true, 00:15:05.252 "get_zone_info": false, 00:15:05.252 "zone_management": false, 00:15:05.252 "zone_append": false, 00:15:05.252 "compare": false, 00:15:05.252 "compare_and_write": false, 00:15:05.252 "abort": true, 00:15:05.252 "seek_hole": false, 00:15:05.252 "seek_data": false, 00:15:05.252 "copy": true, 00:15:05.252 "nvme_iov_md": false 00:15:05.252 }, 00:15:05.252 "memory_domains": [ 00:15:05.252 { 00:15:05.252 "dma_device_id": "system", 00:15:05.252 "dma_device_type": 1 00:15:05.252 }, 00:15:05.252 { 00:15:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.252 "dma_device_type": 2 00:15:05.252 } 00:15:05.252 ], 00:15:05.252 "driver_specific": {} 00:15:05.252 } 00:15:05.252 ] 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.252 
14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.252 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:05.252 "name": "Existed_Raid", 00:15:05.252 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:05.253 "strip_size_kb": 64, 00:15:05.253 "state": "configuring", 00:15:05.253 "raid_level": "raid5f", 00:15:05.253 "superblock": true, 00:15:05.253 "num_base_bdevs": 3, 00:15:05.253 "num_base_bdevs_discovered": 2, 00:15:05.253 "num_base_bdevs_operational": 3, 00:15:05.253 "base_bdevs_list": [ 00:15:05.253 { 00:15:05.253 "name": "BaseBdev1", 00:15:05.253 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:05.253 "is_configured": true, 00:15:05.253 "data_offset": 2048, 00:15:05.253 "data_size": 63488 00:15:05.253 }, 00:15:05.253 { 00:15:05.253 "name": null, 00:15:05.253 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:05.253 "is_configured": false, 00:15:05.253 "data_offset": 0, 00:15:05.253 "data_size": 63488 00:15:05.253 }, 00:15:05.253 { 00:15:05.253 "name": "BaseBdev3", 00:15:05.253 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:05.253 "is_configured": true, 00:15:05.253 "data_offset": 2048, 00:15:05.253 "data_size": 63488 00:15:05.253 } 00:15:05.253 ] 00:15:05.253 }' 00:15:05.253 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.253 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.822 [2024-12-09 14:47:43.859870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.822 14:47:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.822 "name": "Existed_Raid", 00:15:05.822 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:05.822 "strip_size_kb": 64, 00:15:05.822 "state": "configuring", 00:15:05.822 "raid_level": "raid5f", 00:15:05.822 "superblock": true, 00:15:05.822 "num_base_bdevs": 3, 00:15:05.822 "num_base_bdevs_discovered": 1, 00:15:05.822 "num_base_bdevs_operational": 3, 00:15:05.822 "base_bdevs_list": [ 00:15:05.822 { 00:15:05.822 "name": "BaseBdev1", 00:15:05.822 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:05.822 "is_configured": true, 00:15:05.822 "data_offset": 2048, 00:15:05.822 "data_size": 63488 00:15:05.822 }, 00:15:05.822 { 00:15:05.822 "name": null, 00:15:05.822 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:05.822 "is_configured": false, 00:15:05.822 "data_offset": 0, 00:15:05.822 "data_size": 63488 00:15:05.822 }, 00:15:05.822 { 00:15:05.822 "name": null, 00:15:05.822 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:05.822 "is_configured": false, 00:15:05.822 "data_offset": 0, 00:15:05.822 "data_size": 63488 00:15:05.822 } 00:15:05.822 ] 00:15:05.822 }' 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.822 14:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.391 [2024-12-09 14:47:44.359093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.391 14:47:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.391 "name": "Existed_Raid", 00:15:06.391 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:06.391 "strip_size_kb": 64, 00:15:06.391 "state": "configuring", 00:15:06.391 "raid_level": "raid5f", 00:15:06.391 "superblock": true, 00:15:06.391 "num_base_bdevs": 3, 00:15:06.391 "num_base_bdevs_discovered": 2, 00:15:06.391 "num_base_bdevs_operational": 3, 00:15:06.391 "base_bdevs_list": [ 00:15:06.391 { 00:15:06.391 "name": "BaseBdev1", 00:15:06.391 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:06.391 "is_configured": true, 00:15:06.391 "data_offset": 2048, 00:15:06.391 "data_size": 63488 00:15:06.391 }, 00:15:06.391 { 00:15:06.391 "name": null, 00:15:06.391 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:06.391 "is_configured": false, 00:15:06.391 "data_offset": 0, 00:15:06.391 "data_size": 63488 00:15:06.391 }, 00:15:06.391 { 
00:15:06.391 "name": "BaseBdev3", 00:15:06.391 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:06.391 "is_configured": true, 00:15:06.391 "data_offset": 2048, 00:15:06.391 "data_size": 63488 00:15:06.391 } 00:15:06.391 ] 00:15:06.391 }' 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.391 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.650 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.650 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.650 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.650 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.650 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.910 [2024-12-09 14:47:44.782346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.910 "name": "Existed_Raid", 00:15:06.910 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:06.910 "strip_size_kb": 64, 00:15:06.910 "state": "configuring", 00:15:06.910 "raid_level": "raid5f", 00:15:06.910 "superblock": true, 00:15:06.910 "num_base_bdevs": 3, 00:15:06.910 "num_base_bdevs_discovered": 1, 00:15:06.910 
"num_base_bdevs_operational": 3, 00:15:06.910 "base_bdevs_list": [ 00:15:06.910 { 00:15:06.910 "name": null, 00:15:06.910 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:06.910 "is_configured": false, 00:15:06.910 "data_offset": 0, 00:15:06.910 "data_size": 63488 00:15:06.910 }, 00:15:06.910 { 00:15:06.910 "name": null, 00:15:06.910 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:06.910 "is_configured": false, 00:15:06.910 "data_offset": 0, 00:15:06.910 "data_size": 63488 00:15:06.910 }, 00:15:06.910 { 00:15:06.910 "name": "BaseBdev3", 00:15:06.910 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:06.910 "is_configured": true, 00:15:06.910 "data_offset": 2048, 00:15:06.910 "data_size": 63488 00:15:06.910 } 00:15:06.910 ] 00:15:06.910 }' 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.910 14:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.478 14:47:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.478 [2024-12-09 14:47:45.383113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.478 "name": "Existed_Raid", 00:15:07.478 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:07.478 "strip_size_kb": 64, 00:15:07.478 "state": "configuring", 00:15:07.478 "raid_level": "raid5f", 00:15:07.478 "superblock": true, 00:15:07.478 "num_base_bdevs": 3, 00:15:07.478 "num_base_bdevs_discovered": 2, 00:15:07.478 "num_base_bdevs_operational": 3, 00:15:07.478 "base_bdevs_list": [ 00:15:07.478 { 00:15:07.478 "name": null, 00:15:07.478 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:07.478 "is_configured": false, 00:15:07.478 "data_offset": 0, 00:15:07.478 "data_size": 63488 00:15:07.478 }, 00:15:07.478 { 00:15:07.478 "name": "BaseBdev2", 00:15:07.478 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:07.478 "is_configured": true, 00:15:07.478 "data_offset": 2048, 00:15:07.478 "data_size": 63488 00:15:07.478 }, 00:15:07.478 { 00:15:07.478 "name": "BaseBdev3", 00:15:07.478 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:07.478 "is_configured": true, 00:15:07.478 "data_offset": 2048, 00:15:07.478 "data_size": 63488 00:15:07.478 } 00:15:07.478 ] 00:15:07.478 }' 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.478 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.050 14:47:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 23a06286-dbc8-4064-ac65-1700deaf3e7a 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 [2024-12-09 14:47:45.993192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:08.050 [2024-12-09 14:47:45.993496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:08.050 [2024-12-09 14:47:45.993553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:08.050 [2024-12-09 14:47:45.993844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:08.050 NewBaseBdev 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.050 14:47:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.050 14:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 [2024-12-09 14:47:45.999156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:08.050 [2024-12-09 14:47:45.999237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:08.050 [2024-12-09 14:47:45.999443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 [ 00:15:08.050 { 00:15:08.050 "name": "NewBaseBdev", 00:15:08.050 
"aliases": [ 00:15:08.050 "23a06286-dbc8-4064-ac65-1700deaf3e7a" 00:15:08.050 ], 00:15:08.050 "product_name": "Malloc disk", 00:15:08.050 "block_size": 512, 00:15:08.050 "num_blocks": 65536, 00:15:08.050 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:08.050 "assigned_rate_limits": { 00:15:08.050 "rw_ios_per_sec": 0, 00:15:08.050 "rw_mbytes_per_sec": 0, 00:15:08.050 "r_mbytes_per_sec": 0, 00:15:08.050 "w_mbytes_per_sec": 0 00:15:08.050 }, 00:15:08.050 "claimed": true, 00:15:08.050 "claim_type": "exclusive_write", 00:15:08.050 "zoned": false, 00:15:08.050 "supported_io_types": { 00:15:08.050 "read": true, 00:15:08.050 "write": true, 00:15:08.050 "unmap": true, 00:15:08.050 "flush": true, 00:15:08.050 "reset": true, 00:15:08.050 "nvme_admin": false, 00:15:08.050 "nvme_io": false, 00:15:08.050 "nvme_io_md": false, 00:15:08.050 "write_zeroes": true, 00:15:08.050 "zcopy": true, 00:15:08.050 "get_zone_info": false, 00:15:08.050 "zone_management": false, 00:15:08.050 "zone_append": false, 00:15:08.050 "compare": false, 00:15:08.050 "compare_and_write": false, 00:15:08.050 "abort": true, 00:15:08.050 "seek_hole": false, 00:15:08.050 "seek_data": false, 00:15:08.050 "copy": true, 00:15:08.050 "nvme_iov_md": false 00:15:08.050 }, 00:15:08.050 "memory_domains": [ 00:15:08.050 { 00:15:08.050 "dma_device_id": "system", 00:15:08.050 "dma_device_type": 1 00:15:08.050 }, 00:15:08.050 { 00:15:08.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.050 "dma_device_type": 2 00:15:08.050 } 00:15:08.050 ], 00:15:08.050 "driver_specific": {} 00:15:08.050 } 00:15:08.050 ] 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:08.050 14:47:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.050 "name": "Existed_Raid", 00:15:08.050 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:08.050 "strip_size_kb": 64, 00:15:08.050 "state": "online", 00:15:08.050 "raid_level": "raid5f", 00:15:08.050 "superblock": true, 00:15:08.050 
"num_base_bdevs": 3, 00:15:08.050 "num_base_bdevs_discovered": 3, 00:15:08.050 "num_base_bdevs_operational": 3, 00:15:08.050 "base_bdevs_list": [ 00:15:08.050 { 00:15:08.050 "name": "NewBaseBdev", 00:15:08.050 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:08.050 "is_configured": true, 00:15:08.050 "data_offset": 2048, 00:15:08.050 "data_size": 63488 00:15:08.050 }, 00:15:08.050 { 00:15:08.050 "name": "BaseBdev2", 00:15:08.050 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:08.050 "is_configured": true, 00:15:08.050 "data_offset": 2048, 00:15:08.050 "data_size": 63488 00:15:08.050 }, 00:15:08.050 { 00:15:08.050 "name": "BaseBdev3", 00:15:08.050 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:08.050 "is_configured": true, 00:15:08.050 "data_offset": 2048, 00:15:08.050 "data_size": 63488 00:15:08.050 } 00:15:08.050 ] 00:15:08.050 }' 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.050 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.619 [2024-12-09 14:47:46.504879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.619 "name": "Existed_Raid", 00:15:08.619 "aliases": [ 00:15:08.619 "00b231ec-5f96-4af1-95f0-9b5b36c61598" 00:15:08.619 ], 00:15:08.619 "product_name": "Raid Volume", 00:15:08.619 "block_size": 512, 00:15:08.619 "num_blocks": 126976, 00:15:08.619 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:08.619 "assigned_rate_limits": { 00:15:08.619 "rw_ios_per_sec": 0, 00:15:08.619 "rw_mbytes_per_sec": 0, 00:15:08.619 "r_mbytes_per_sec": 0, 00:15:08.619 "w_mbytes_per_sec": 0 00:15:08.619 }, 00:15:08.619 "claimed": false, 00:15:08.619 "zoned": false, 00:15:08.619 "supported_io_types": { 00:15:08.619 "read": true, 00:15:08.619 "write": true, 00:15:08.619 "unmap": false, 00:15:08.619 "flush": false, 00:15:08.619 "reset": true, 00:15:08.619 "nvme_admin": false, 00:15:08.619 "nvme_io": false, 00:15:08.619 "nvme_io_md": false, 00:15:08.619 "write_zeroes": true, 00:15:08.619 "zcopy": false, 00:15:08.619 "get_zone_info": false, 00:15:08.619 "zone_management": false, 00:15:08.619 "zone_append": false, 00:15:08.619 "compare": false, 00:15:08.619 "compare_and_write": false, 00:15:08.619 "abort": false, 00:15:08.619 "seek_hole": false, 00:15:08.619 "seek_data": false, 00:15:08.619 "copy": false, 00:15:08.619 "nvme_iov_md": false 00:15:08.619 }, 00:15:08.619 "driver_specific": { 00:15:08.619 "raid": { 00:15:08.619 "uuid": "00b231ec-5f96-4af1-95f0-9b5b36c61598", 00:15:08.619 
"strip_size_kb": 64, 00:15:08.619 "state": "online", 00:15:08.619 "raid_level": "raid5f", 00:15:08.619 "superblock": true, 00:15:08.619 "num_base_bdevs": 3, 00:15:08.619 "num_base_bdevs_discovered": 3, 00:15:08.619 "num_base_bdevs_operational": 3, 00:15:08.619 "base_bdevs_list": [ 00:15:08.619 { 00:15:08.619 "name": "NewBaseBdev", 00:15:08.619 "uuid": "23a06286-dbc8-4064-ac65-1700deaf3e7a", 00:15:08.619 "is_configured": true, 00:15:08.619 "data_offset": 2048, 00:15:08.619 "data_size": 63488 00:15:08.619 }, 00:15:08.619 { 00:15:08.619 "name": "BaseBdev2", 00:15:08.619 "uuid": "026794f0-7a6f-4d47-b35d-3a10486f62ba", 00:15:08.619 "is_configured": true, 00:15:08.619 "data_offset": 2048, 00:15:08.619 "data_size": 63488 00:15:08.619 }, 00:15:08.619 { 00:15:08.619 "name": "BaseBdev3", 00:15:08.619 "uuid": "99075526-df10-45c7-a1c5-34c3218d8bad", 00:15:08.619 "is_configured": true, 00:15:08.619 "data_offset": 2048, 00:15:08.619 "data_size": 63488 00:15:08.619 } 00:15:08.619 ] 00:15:08.619 } 00:15:08.619 } 00:15:08.619 }' 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:08.619 BaseBdev2 00:15:08.619 BaseBdev3' 00:15:08.619 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.620 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.879 
14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.879 [2024-12-09 14:47:46.796210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.879 [2024-12-09 14:47:46.796288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.879 [2024-12-09 14:47:46.796448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.879 [2024-12-09 14:47:46.796779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.879 [2024-12-09 14:47:46.796838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81827 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81827 ']' 00:15:08.879 14:47:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81827 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.879 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81827 00:15:08.880 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.880 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.880 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81827' 00:15:08.880 killing process with pid 81827 00:15:08.880 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81827 00:15:08.880 [2024-12-09 14:47:46.842932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.880 14:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81827 00:15:09.139 [2024-12-09 14:47:47.139326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.555 14:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:10.555 00:15:10.555 real 0m10.963s 00:15:10.555 user 0m17.492s 00:15:10.555 sys 0m1.935s 00:15:10.555 14:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.555 ************************************ 00:15:10.555 END TEST raid5f_state_function_test_sb 00:15:10.555 ************************************ 00:15:10.555 14:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.555 14:47:48 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:15:10.555 14:47:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:10.555 14:47:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.555 14:47:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.555 ************************************ 00:15:10.555 START TEST raid5f_superblock_test 00:15:10.555 ************************************ 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82449 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:10.555 14:47:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82449 00:15:10.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.556 14:47:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 82449 ']' 00:15:10.556 14:47:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.556 14:47:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.556 14:47:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.556 14:47:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.556 14:47:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.556 [2024-12-09 14:47:48.427477] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:15:10.556 [2024-12-09 14:47:48.427709] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82449 ] 00:15:10.556 [2024-12-09 14:47:48.618508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.815 [2024-12-09 14:47:48.737534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.815 [2024-12-09 14:47:48.930222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.815 [2024-12-09 14:47:48.930284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 malloc1 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 [2024-12-09 14:47:49.309150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.386 [2024-12-09 14:47:49.309252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.386 [2024-12-09 14:47:49.309290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:11.386 [2024-12-09 14:47:49.309329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.386 [2024-12-09 14:47:49.311464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.386 [2024-12-09 14:47:49.311538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.386 pt1 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 malloc2 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 [2024-12-09 14:47:49.366698] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.386 [2024-12-09 14:47:49.366817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.386 [2024-12-09 14:47:49.366861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:11.386 [2024-12-09 14:47:49.366890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.386 [2024-12-09 14:47:49.369151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.386 [2024-12-09 14:47:49.369219] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.386 pt2 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 malloc3 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 [2024-12-09 14:47:49.429334] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:11.386 [2024-12-09 14:47:49.429425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.386 [2024-12-09 14:47:49.429479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:11.386 [2024-12-09 14:47:49.429508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.386 [2024-12-09 14:47:49.431536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.386 [2024-12-09 14:47:49.431620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:11.386 pt3 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 [2024-12-09 14:47:49.437373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.386 [2024-12-09 14:47:49.439127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.386 [2024-12-09 14:47:49.439270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:11.386 [2024-12-09 14:47:49.439499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:11.386 [2024-12-09 14:47:49.439560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:11.386 [2024-12-09 14:47:49.439860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:11.386 [2024-12-09 14:47:49.445713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:11.386 [2024-12-09 14:47:49.445783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:11.386 [2024-12-09 14:47:49.446085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.386 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.386 "name": "raid_bdev1", 00:15:11.386 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:11.386 "strip_size_kb": 64, 00:15:11.386 "state": "online", 00:15:11.386 "raid_level": "raid5f", 00:15:11.386 "superblock": true, 00:15:11.386 "num_base_bdevs": 3, 00:15:11.386 "num_base_bdevs_discovered": 3, 00:15:11.386 "num_base_bdevs_operational": 3, 00:15:11.386 "base_bdevs_list": [ 00:15:11.386 { 00:15:11.386 "name": "pt1", 00:15:11.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.386 "is_configured": true, 00:15:11.386 "data_offset": 2048, 00:15:11.386 "data_size": 63488 00:15:11.386 }, 00:15:11.387 { 00:15:11.387 "name": "pt2", 00:15:11.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.387 "is_configured": true, 00:15:11.387 "data_offset": 2048, 00:15:11.387 "data_size": 63488 00:15:11.387 }, 00:15:11.387 { 00:15:11.387 "name": "pt3", 00:15:11.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.387 "is_configured": true, 00:15:11.387 "data_offset": 2048, 00:15:11.387 "data_size": 63488 00:15:11.387 } 00:15:11.387 ] 00:15:11.387 }' 00:15:11.387 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.387 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:11.955 14:47:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.955 [2024-12-09 14:47:49.884467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.955 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.955 "name": "raid_bdev1", 00:15:11.955 "aliases": [ 00:15:11.955 "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25" 00:15:11.955 ], 00:15:11.955 "product_name": "Raid Volume", 00:15:11.955 "block_size": 512, 00:15:11.955 "num_blocks": 126976, 00:15:11.955 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:11.955 "assigned_rate_limits": { 00:15:11.955 "rw_ios_per_sec": 0, 00:15:11.955 "rw_mbytes_per_sec": 0, 00:15:11.955 "r_mbytes_per_sec": 0, 00:15:11.955 "w_mbytes_per_sec": 0 00:15:11.955 }, 00:15:11.955 "claimed": false, 00:15:11.955 "zoned": false, 00:15:11.955 "supported_io_types": { 00:15:11.955 "read": true, 00:15:11.955 "write": true, 00:15:11.955 "unmap": false, 00:15:11.955 "flush": false, 00:15:11.955 "reset": true, 00:15:11.955 "nvme_admin": false, 00:15:11.955 "nvme_io": false, 00:15:11.955 "nvme_io_md": false, 
00:15:11.955 "write_zeroes": true, 00:15:11.955 "zcopy": false, 00:15:11.955 "get_zone_info": false, 00:15:11.955 "zone_management": false, 00:15:11.955 "zone_append": false, 00:15:11.955 "compare": false, 00:15:11.955 "compare_and_write": false, 00:15:11.955 "abort": false, 00:15:11.955 "seek_hole": false, 00:15:11.955 "seek_data": false, 00:15:11.955 "copy": false, 00:15:11.955 "nvme_iov_md": false 00:15:11.955 }, 00:15:11.955 "driver_specific": { 00:15:11.956 "raid": { 00:15:11.956 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:11.956 "strip_size_kb": 64, 00:15:11.956 "state": "online", 00:15:11.956 "raid_level": "raid5f", 00:15:11.956 "superblock": true, 00:15:11.956 "num_base_bdevs": 3, 00:15:11.956 "num_base_bdevs_discovered": 3, 00:15:11.956 "num_base_bdevs_operational": 3, 00:15:11.956 "base_bdevs_list": [ 00:15:11.956 { 00:15:11.956 "name": "pt1", 00:15:11.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.956 "is_configured": true, 00:15:11.956 "data_offset": 2048, 00:15:11.956 "data_size": 63488 00:15:11.956 }, 00:15:11.956 { 00:15:11.956 "name": "pt2", 00:15:11.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.956 "is_configured": true, 00:15:11.956 "data_offset": 2048, 00:15:11.956 "data_size": 63488 00:15:11.956 }, 00:15:11.956 { 00:15:11.956 "name": "pt3", 00:15:11.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.956 "is_configured": true, 00:15:11.956 "data_offset": 2048, 00:15:11.956 "data_size": 63488 00:15:11.956 } 00:15:11.956 ] 00:15:11.956 } 00:15:11.956 } 00:15:11.956 }' 00:15:11.956 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.956 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:11.956 pt2 00:15:11.956 pt3' 00:15:11.956 14:47:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:11.956 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.956 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.956 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:11.956 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.956 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.956 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.956 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.215 
14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:12.215 [2024-12-09 14:47:50.171905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a7a2f851-6a54-4e02-bb6a-46ae71a5ec25 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a7a2f851-6a54-4e02-bb6a-46ae71a5ec25 ']' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.215 14:47:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 [2024-12-09 14:47:50.219658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.215 [2024-12-09 14:47:50.219741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.215 [2024-12-09 14:47:50.219849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.215 [2024-12-09 14:47:50.219955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.215 [2024-12-09 14:47:50.220002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:12.215 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.476 [2024-12-09 14:47:50.367431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:12.476 [2024-12-09 14:47:50.369390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:12.476 [2024-12-09 14:47:50.369448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:12.476 [2024-12-09 14:47:50.369499] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:12.476 [2024-12-09 14:47:50.369551] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:12.476 [2024-12-09 14:47:50.369580] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:12.476 [2024-12-09 14:47:50.369598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.476 [2024-12-09 14:47:50.369607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:12.476 request: 00:15:12.476 { 00:15:12.476 "name": "raid_bdev1", 00:15:12.476 "raid_level": "raid5f", 00:15:12.476 "base_bdevs": [ 00:15:12.476 "malloc1", 00:15:12.476 "malloc2", 00:15:12.476 "malloc3" 00:15:12.476 ], 00:15:12.476 "strip_size_kb": 64, 00:15:12.476 "superblock": false, 00:15:12.476 "method": "bdev_raid_create", 00:15:12.476 "req_id": 1 00:15:12.476 } 00:15:12.476 Got JSON-RPC error response 00:15:12.476 response: 00:15:12.476 { 00:15:12.476 "code": -17, 00:15:12.476 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:12.476 } 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.476 
14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.476 [2024-12-09 14:47:50.427263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:12.476 [2024-12-09 14:47:50.427358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.476 [2024-12-09 14:47:50.427393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:12.476 [2024-12-09 14:47:50.427419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.476 [2024-12-09 14:47:50.429623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.476 [2024-12-09 14:47:50.429686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:12.476 [2024-12-09 14:47:50.429806] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:12.476 [2024-12-09 14:47:50.429894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:12.476 pt1 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.476 "name": "raid_bdev1", 00:15:12.476 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:12.476 "strip_size_kb": 64, 00:15:12.476 "state": "configuring", 00:15:12.476 "raid_level": "raid5f", 00:15:12.476 "superblock": true, 00:15:12.476 "num_base_bdevs": 3, 00:15:12.476 "num_base_bdevs_discovered": 1, 00:15:12.476 
"num_base_bdevs_operational": 3, 00:15:12.476 "base_bdevs_list": [ 00:15:12.476 { 00:15:12.476 "name": "pt1", 00:15:12.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.476 "is_configured": true, 00:15:12.476 "data_offset": 2048, 00:15:12.476 "data_size": 63488 00:15:12.476 }, 00:15:12.476 { 00:15:12.476 "name": null, 00:15:12.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.476 "is_configured": false, 00:15:12.476 "data_offset": 2048, 00:15:12.476 "data_size": 63488 00:15:12.476 }, 00:15:12.476 { 00:15:12.476 "name": null, 00:15:12.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.476 "is_configured": false, 00:15:12.476 "data_offset": 2048, 00:15:12.476 "data_size": 63488 00:15:12.476 } 00:15:12.476 ] 00:15:12.476 }' 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.476 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.046 [2024-12-09 14:47:50.906451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.046 [2024-12-09 14:47:50.906557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.046 [2024-12-09 14:47:50.906606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:13.046 [2024-12-09 14:47:50.906635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.046 [2024-12-09 14:47:50.907109] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.046 [2024-12-09 14:47:50.907171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.046 [2024-12-09 14:47:50.907300] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:13.046 [2024-12-09 14:47:50.907361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.046 pt2 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.046 [2024-12-09 14:47:50.918423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.046 14:47:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.046 "name": "raid_bdev1", 00:15:13.046 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:13.046 "strip_size_kb": 64, 00:15:13.046 "state": "configuring", 00:15:13.046 "raid_level": "raid5f", 00:15:13.046 "superblock": true, 00:15:13.046 "num_base_bdevs": 3, 00:15:13.046 "num_base_bdevs_discovered": 1, 00:15:13.046 "num_base_bdevs_operational": 3, 00:15:13.046 "base_bdevs_list": [ 00:15:13.046 { 00:15:13.046 "name": "pt1", 00:15:13.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.046 "is_configured": true, 00:15:13.046 "data_offset": 2048, 00:15:13.046 "data_size": 63488 00:15:13.046 }, 00:15:13.046 { 00:15:13.046 "name": null, 00:15:13.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.046 "is_configured": false, 00:15:13.046 "data_offset": 0, 00:15:13.046 "data_size": 63488 00:15:13.046 }, 00:15:13.046 { 00:15:13.047 "name": null, 00:15:13.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.047 "is_configured": false, 00:15:13.047 "data_offset": 2048, 00:15:13.047 "data_size": 63488 00:15:13.047 } 00:15:13.047 ] 00:15:13.047 }' 00:15:13.047 14:47:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.047 14:47:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.307 [2024-12-09 14:47:51.337719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.307 [2024-12-09 14:47:51.337838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.307 [2024-12-09 14:47:51.337861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:13.307 [2024-12-09 14:47:51.337871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.307 [2024-12-09 14:47:51.338345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.307 [2024-12-09 14:47:51.338366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.307 [2024-12-09 14:47:51.338450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:13.307 [2024-12-09 14:47:51.338474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.307 pt2 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:13.307 14:47:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.307 [2024-12-09 14:47:51.349658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:13.307 [2024-12-09 14:47:51.349744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.307 [2024-12-09 14:47:51.349774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:13.307 [2024-12-09 14:47:51.349801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.307 [2024-12-09 14:47:51.350204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.307 [2024-12-09 14:47:51.350265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:13.307 [2024-12-09 14:47:51.350359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:13.307 [2024-12-09 14:47:51.350408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:13.307 [2024-12-09 14:47:51.350563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:13.307 [2024-12-09 14:47:51.350620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:13.307 [2024-12-09 14:47:51.350869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:13.307 [2024-12-09 14:47:51.356320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:13.307 [2024-12-09 14:47:51.356373] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:13.307 [2024-12-09 14:47:51.356588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.307 pt3 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.307 "name": "raid_bdev1", 00:15:13.307 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:13.307 "strip_size_kb": 64, 00:15:13.307 "state": "online", 00:15:13.307 "raid_level": "raid5f", 00:15:13.307 "superblock": true, 00:15:13.307 "num_base_bdevs": 3, 00:15:13.307 "num_base_bdevs_discovered": 3, 00:15:13.307 "num_base_bdevs_operational": 3, 00:15:13.307 "base_bdevs_list": [ 00:15:13.307 { 00:15:13.307 "name": "pt1", 00:15:13.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.307 "is_configured": true, 00:15:13.307 "data_offset": 2048, 00:15:13.307 "data_size": 63488 00:15:13.307 }, 00:15:13.307 { 00:15:13.307 "name": "pt2", 00:15:13.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.307 "is_configured": true, 00:15:13.307 "data_offset": 2048, 00:15:13.307 "data_size": 63488 00:15:13.307 }, 00:15:13.307 { 00:15:13.307 "name": "pt3", 00:15:13.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.307 "is_configured": true, 00:15:13.307 "data_offset": 2048, 00:15:13.307 "data_size": 63488 00:15:13.307 } 00:15:13.307 ] 00:15:13.307 }' 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.307 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.876 [2024-12-09 14:47:51.806523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.876 "name": "raid_bdev1", 00:15:13.876 "aliases": [ 00:15:13.876 "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25" 00:15:13.876 ], 00:15:13.876 "product_name": "Raid Volume", 00:15:13.876 "block_size": 512, 00:15:13.876 "num_blocks": 126976, 00:15:13.876 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:13.876 "assigned_rate_limits": { 00:15:13.876 "rw_ios_per_sec": 0, 00:15:13.876 "rw_mbytes_per_sec": 0, 00:15:13.876 "r_mbytes_per_sec": 0, 00:15:13.876 "w_mbytes_per_sec": 0 00:15:13.876 }, 00:15:13.876 "claimed": false, 00:15:13.876 "zoned": false, 00:15:13.876 "supported_io_types": { 00:15:13.876 "read": true, 00:15:13.876 "write": true, 00:15:13.876 "unmap": false, 00:15:13.876 "flush": false, 00:15:13.876 "reset": true, 00:15:13.876 "nvme_admin": false, 00:15:13.876 "nvme_io": false, 00:15:13.876 "nvme_io_md": false, 00:15:13.876 "write_zeroes": true, 00:15:13.876 "zcopy": false, 00:15:13.876 
"get_zone_info": false, 00:15:13.876 "zone_management": false, 00:15:13.876 "zone_append": false, 00:15:13.876 "compare": false, 00:15:13.876 "compare_and_write": false, 00:15:13.876 "abort": false, 00:15:13.876 "seek_hole": false, 00:15:13.876 "seek_data": false, 00:15:13.876 "copy": false, 00:15:13.876 "nvme_iov_md": false 00:15:13.876 }, 00:15:13.876 "driver_specific": { 00:15:13.876 "raid": { 00:15:13.876 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:13.876 "strip_size_kb": 64, 00:15:13.876 "state": "online", 00:15:13.876 "raid_level": "raid5f", 00:15:13.876 "superblock": true, 00:15:13.876 "num_base_bdevs": 3, 00:15:13.876 "num_base_bdevs_discovered": 3, 00:15:13.876 "num_base_bdevs_operational": 3, 00:15:13.876 "base_bdevs_list": [ 00:15:13.876 { 00:15:13.876 "name": "pt1", 00:15:13.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.876 "is_configured": true, 00:15:13.876 "data_offset": 2048, 00:15:13.876 "data_size": 63488 00:15:13.876 }, 00:15:13.876 { 00:15:13.876 "name": "pt2", 00:15:13.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.876 "is_configured": true, 00:15:13.876 "data_offset": 2048, 00:15:13.876 "data_size": 63488 00:15:13.876 }, 00:15:13.876 { 00:15:13.876 "name": "pt3", 00:15:13.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.876 "is_configured": true, 00:15:13.876 "data_offset": 2048, 00:15:13.876 "data_size": 63488 00:15:13.876 } 00:15:13.876 ] 00:15:13.876 } 00:15:13.876 } 00:15:13.876 }' 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:13.876 pt2 00:15:13.876 pt3' 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.876 14:47:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.876 14:47:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.136 [2024-12-09 14:47:52.094005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a7a2f851-6a54-4e02-bb6a-46ae71a5ec25 '!=' a7a2f851-6a54-4e02-bb6a-46ae71a5ec25 ']' 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.136 [2024-12-09 14:47:52.141763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.136 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.136 "name": "raid_bdev1", 00:15:14.136 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:14.136 "strip_size_kb": 64, 00:15:14.136 "state": "online", 00:15:14.136 "raid_level": "raid5f", 00:15:14.136 "superblock": true, 00:15:14.136 "num_base_bdevs": 3, 00:15:14.136 "num_base_bdevs_discovered": 2, 00:15:14.136 "num_base_bdevs_operational": 2, 00:15:14.136 "base_bdevs_list": [ 00:15:14.136 { 00:15:14.136 "name": null, 00:15:14.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.136 "is_configured": false, 00:15:14.137 "data_offset": 0, 00:15:14.137 "data_size": 63488 00:15:14.137 }, 00:15:14.137 { 00:15:14.137 "name": "pt2", 00:15:14.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.137 "is_configured": true, 00:15:14.137 "data_offset": 2048, 00:15:14.137 "data_size": 63488 00:15:14.137 }, 00:15:14.137 { 00:15:14.137 "name": "pt3", 00:15:14.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.137 "is_configured": true, 00:15:14.137 "data_offset": 2048, 00:15:14.137 "data_size": 63488 00:15:14.137 } 00:15:14.137 ] 00:15:14.137 }' 00:15:14.137 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.137 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.706 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.706 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.706 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.706 [2024-12-09 14:47:52.573007] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.706 [2024-12-09 14:47:52.573077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.706 [2024-12-09 14:47:52.573190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.706 [2024-12-09 14:47:52.573267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.706 [2024-12-09 14:47:52.573324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:14.706 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.706 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.707 [2024-12-09 14:47:52.656821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:14.707 [2024-12-09 14:47:52.656917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.707 [2024-12-09 14:47:52.656949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:14.707 [2024-12-09 14:47:52.656983] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:14.707 [2024-12-09 14:47:52.659157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.707 [2024-12-09 14:47:52.659239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:14.707 [2024-12-09 14:47:52.659353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:14.707 [2024-12-09 14:47:52.659427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.707 pt2 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.707 "name": "raid_bdev1", 00:15:14.707 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:14.707 "strip_size_kb": 64, 00:15:14.707 "state": "configuring", 00:15:14.707 "raid_level": "raid5f", 00:15:14.707 "superblock": true, 00:15:14.707 "num_base_bdevs": 3, 00:15:14.707 "num_base_bdevs_discovered": 1, 00:15:14.707 "num_base_bdevs_operational": 2, 00:15:14.707 "base_bdevs_list": [ 00:15:14.707 { 00:15:14.707 "name": null, 00:15:14.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.707 "is_configured": false, 00:15:14.707 "data_offset": 2048, 00:15:14.707 "data_size": 63488 00:15:14.707 }, 00:15:14.707 { 00:15:14.707 "name": "pt2", 00:15:14.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.707 "is_configured": true, 00:15:14.707 "data_offset": 2048, 00:15:14.707 "data_size": 63488 00:15:14.707 }, 00:15:14.707 { 00:15:14.707 "name": null, 00:15:14.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.707 "is_configured": false, 00:15:14.707 "data_offset": 2048, 00:15:14.707 "data_size": 63488 00:15:14.707 } 00:15:14.707 ] 00:15:14.707 }' 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.707 14:47:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.277 [2024-12-09 14:47:53.112112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:15.277 [2024-12-09 14:47:53.112245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.277 [2024-12-09 14:47:53.112288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:15.277 [2024-12-09 14:47:53.112322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.277 [2024-12-09 14:47:53.112840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.277 [2024-12-09 14:47:53.112902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:15.277 [2024-12-09 14:47:53.113013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:15.277 [2024-12-09 14:47:53.113046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:15.277 [2024-12-09 14:47:53.113167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:15.277 [2024-12-09 14:47:53.113178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.277 [2024-12-09 14:47:53.113435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:15.277 [2024-12-09 14:47:53.118792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:15.277 [2024-12-09 14:47:53.118848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:15.277 [2024-12-09 14:47:53.119228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.277 pt3 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.277 14:47:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.277 "name": "raid_bdev1", 00:15:15.277 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:15.277 "strip_size_kb": 64, 00:15:15.277 "state": "online", 00:15:15.277 "raid_level": "raid5f", 00:15:15.277 "superblock": true, 00:15:15.277 "num_base_bdevs": 3, 00:15:15.277 "num_base_bdevs_discovered": 2, 00:15:15.277 "num_base_bdevs_operational": 2, 00:15:15.277 "base_bdevs_list": [ 00:15:15.277 { 00:15:15.277 "name": null, 00:15:15.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.277 "is_configured": false, 00:15:15.277 "data_offset": 2048, 00:15:15.277 "data_size": 63488 00:15:15.277 }, 00:15:15.277 { 00:15:15.277 "name": "pt2", 00:15:15.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.277 "is_configured": true, 00:15:15.277 "data_offset": 2048, 00:15:15.277 "data_size": 63488 00:15:15.277 }, 00:15:15.277 { 00:15:15.277 "name": "pt3", 00:15:15.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.277 "is_configured": true, 00:15:15.277 "data_offset": 2048, 00:15:15.277 "data_size": 63488 00:15:15.277 } 00:15:15.277 ] 00:15:15.277 }' 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.277 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.537 [2024-12-09 14:47:53.525537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.537 [2024-12-09 14:47:53.525630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.537 [2024-12-09 14:47:53.525730] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.537 [2024-12-09 14:47:53.525832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.537 [2024-12-09 14:47:53.525895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.537 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.538 [2024-12-09 14:47:53.597414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:15.538 [2024-12-09 14:47:53.597509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.538 [2024-12-09 14:47:53.597545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:15.538 [2024-12-09 14:47:53.597580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.538 [2024-12-09 14:47:53.599942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.538 [2024-12-09 14:47:53.600010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:15.538 [2024-12-09 14:47:53.600114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:15.538 [2024-12-09 14:47:53.600189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:15.538 [2024-12-09 14:47:53.600398] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:15.538 [2024-12-09 14:47:53.600463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.538 [2024-12-09 14:47:53.600504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:15.538 [2024-12-09 14:47:53.600608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.538 pt1 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:15.538 14:47:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.538 "name": "raid_bdev1", 00:15:15.538 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:15.538 "strip_size_kb": 64, 00:15:15.538 "state": "configuring", 00:15:15.538 "raid_level": "raid5f", 00:15:15.538 
"superblock": true, 00:15:15.538 "num_base_bdevs": 3, 00:15:15.538 "num_base_bdevs_discovered": 1, 00:15:15.538 "num_base_bdevs_operational": 2, 00:15:15.538 "base_bdevs_list": [ 00:15:15.538 { 00:15:15.538 "name": null, 00:15:15.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.538 "is_configured": false, 00:15:15.538 "data_offset": 2048, 00:15:15.538 "data_size": 63488 00:15:15.538 }, 00:15:15.538 { 00:15:15.538 "name": "pt2", 00:15:15.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.538 "is_configured": true, 00:15:15.538 "data_offset": 2048, 00:15:15.538 "data_size": 63488 00:15:15.538 }, 00:15:15.538 { 00:15:15.538 "name": null, 00:15:15.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.538 "is_configured": false, 00:15:15.538 "data_offset": 2048, 00:15:15.538 "data_size": 63488 00:15:15.538 } 00:15:15.538 ] 00:15:15.538 }' 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.538 14:47:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.124 [2024-12-09 14:47:54.112587] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:16.124 [2024-12-09 14:47:54.112710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.124 [2024-12-09 14:47:54.112753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:16.124 [2024-12-09 14:47:54.112816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.124 [2024-12-09 14:47:54.113373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.124 [2024-12-09 14:47:54.113437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:16.124 [2024-12-09 14:47:54.113564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:16.124 [2024-12-09 14:47:54.113633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:16.124 [2024-12-09 14:47:54.113794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:16.124 [2024-12-09 14:47:54.113837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.124 [2024-12-09 14:47:54.114180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:16.124 [2024-12-09 14:47:54.120098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:16.124 [2024-12-09 14:47:54.120168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:16.124 [2024-12-09 14:47:54.120496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.124 pt3 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.124 "name": "raid_bdev1", 00:15:16.124 "uuid": "a7a2f851-6a54-4e02-bb6a-46ae71a5ec25", 00:15:16.124 "strip_size_kb": 64, 00:15:16.124 "state": "online", 00:15:16.124 "raid_level": 
"raid5f", 00:15:16.124 "superblock": true, 00:15:16.124 "num_base_bdevs": 3, 00:15:16.124 "num_base_bdevs_discovered": 2, 00:15:16.124 "num_base_bdevs_operational": 2, 00:15:16.124 "base_bdevs_list": [ 00:15:16.124 { 00:15:16.124 "name": null, 00:15:16.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.124 "is_configured": false, 00:15:16.124 "data_offset": 2048, 00:15:16.124 "data_size": 63488 00:15:16.124 }, 00:15:16.124 { 00:15:16.124 "name": "pt2", 00:15:16.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.124 "is_configured": true, 00:15:16.124 "data_offset": 2048, 00:15:16.124 "data_size": 63488 00:15:16.124 }, 00:15:16.124 { 00:15:16.124 "name": "pt3", 00:15:16.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.124 "is_configured": true, 00:15:16.124 "data_offset": 2048, 00:15:16.124 "data_size": 63488 00:15:16.124 } 00:15:16.124 ] 00:15:16.124 }' 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.124 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.716 [2024-12-09 14:47:54.634604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a7a2f851-6a54-4e02-bb6a-46ae71a5ec25 '!=' a7a2f851-6a54-4e02-bb6a-46ae71a5ec25 ']' 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82449 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 82449 ']' 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 82449 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82449 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.716 killing process with pid 82449 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82449' 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 82449 00:15:16.716 [2024-12-09 14:47:54.698473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.716 [2024-12-09 14:47:54.698594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:16.716 [2024-12-09 14:47:54.698666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.716 [2024-12-09 14:47:54.698678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:16.716 14:47:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 82449 00:15:16.975 [2024-12-09 14:47:54.999867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.353 14:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:18.353 00:15:18.353 real 0m7.777s 00:15:18.353 user 0m12.166s 00:15:18.353 sys 0m1.408s 00:15:18.353 14:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.353 ************************************ 00:15:18.353 END TEST raid5f_superblock_test 00:15:18.353 ************************************ 00:15:18.353 14:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.353 14:47:56 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:18.353 14:47:56 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:18.353 14:47:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:18.353 14:47:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.353 14:47:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.353 ************************************ 00:15:18.353 START TEST raid5f_rebuild_test 00:15:18.353 ************************************ 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:18.353 14:47:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82894 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82894 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82894 ']' 00:15:18.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.353 14:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.353 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:18.353 Zero copy mechanism will not be used. 00:15:18.353 [2024-12-09 14:47:56.285604] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:15:18.353 [2024-12-09 14:47:56.285712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82894 ] 00:15:18.353 [2024-12-09 14:47:56.459103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.613 [2024-12-09 14:47:56.574113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.872 [2024-12-09 14:47:56.771522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.872 [2024-12-09 14:47:56.771606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.132 BaseBdev1_malloc 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.132 14:47:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.132 [2024-12-09 14:47:57.144523] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.132 [2024-12-09 14:47:57.144649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.132 [2024-12-09 14:47:57.144676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:19.132 [2024-12-09 14:47:57.144687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.132 [2024-12-09 14:47:57.146694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.132 [2024-12-09 14:47:57.146733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.132 BaseBdev1 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.132 BaseBdev2_malloc 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.132 [2024-12-09 14:47:57.197632] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:19.132 [2024-12-09 14:47:57.197688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.132 [2024-12-09 14:47:57.197709] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:19.132 [2024-12-09 14:47:57.197719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.132 [2024-12-09 14:47:57.199721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.132 [2024-12-09 14:47:57.199810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:19.132 BaseBdev2 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.132 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.392 BaseBdev3_malloc 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.392 [2024-12-09 14:47:57.260493] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:19.392 [2024-12-09 14:47:57.260608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.392 [2024-12-09 14:47:57.260633] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:19.392 [2024-12-09 14:47:57.260644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.392 [2024-12-09 14:47:57.262562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.392 [2024-12-09 14:47:57.262610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:19.392 BaseBdev3 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.392 spare_malloc 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.392 spare_delay 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.392 [2024-12-09 14:47:57.325490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:19.392 [2024-12-09 14:47:57.325542] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.392 [2024-12-09 14:47:57.325558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:19.392 [2024-12-09 14:47:57.325576] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.392 [2024-12-09 14:47:57.327607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.392 [2024-12-09 14:47:57.327646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:19.392 spare 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.392 [2024-12-09 14:47:57.337531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.392 [2024-12-09 14:47:57.339245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.392 [2024-12-09 14:47:57.339373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.392 [2024-12-09 14:47:57.339477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:19.392 [2024-12-09 14:47:57.339489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:19.392 [2024-12-09 14:47:57.339744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:19.392 [2024-12-09 14:47:57.345276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:19.392 [2024-12-09 14:47:57.345331] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:19.392 [2024-12-09 14:47:57.345555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.392 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.393 14:47:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.393 "name": "raid_bdev1", 00:15:19.393 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:19.393 "strip_size_kb": 64, 00:15:19.393 "state": "online", 00:15:19.393 "raid_level": "raid5f", 00:15:19.393 "superblock": false, 00:15:19.393 "num_base_bdevs": 3, 00:15:19.393 "num_base_bdevs_discovered": 3, 00:15:19.393 "num_base_bdevs_operational": 3, 00:15:19.393 "base_bdevs_list": [ 00:15:19.393 { 00:15:19.393 "name": "BaseBdev1", 00:15:19.393 "uuid": "8a1e7513-316a-5871-8dd4-4e04e1493384", 00:15:19.393 "is_configured": true, 00:15:19.393 "data_offset": 0, 00:15:19.393 "data_size": 65536 00:15:19.393 }, 00:15:19.393 { 00:15:19.393 "name": "BaseBdev2", 00:15:19.393 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:19.393 "is_configured": true, 00:15:19.393 "data_offset": 0, 00:15:19.393 "data_size": 65536 00:15:19.393 }, 00:15:19.393 { 00:15:19.393 "name": "BaseBdev3", 00:15:19.393 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:19.393 "is_configured": true, 00:15:19.393 "data_offset": 0, 00:15:19.393 "data_size": 65536 00:15:19.393 } 00:15:19.393 ] 00:15:19.393 }' 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.393 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.961 [2024-12-09 14:47:57.803474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.961 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.962 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:19.962 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.962 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:19.962 14:47:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:19.962 [2024-12-09 14:47:58.042886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:19.962 /dev/nbd0 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.221 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.222 1+0 records in 00:15:20.222 1+0 records out 00:15:20.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419429 s, 9.8 MB/s 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:20.222 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:20.481 512+0 records in 00:15:20.482 512+0 records out 00:15:20.482 67108864 bytes (67 MB, 64 MiB) copied, 0.37713 s, 178 MB/s 00:15:20.482 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.482 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.482 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.482 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.482 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:20.482 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.482 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.741 
[2024-12-09 14:47:58.702819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.741 [2024-12-09 14:47:58.715415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.741 "name": "raid_bdev1", 00:15:20.741 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:20.741 "strip_size_kb": 64, 00:15:20.741 "state": "online", 00:15:20.741 "raid_level": "raid5f", 00:15:20.741 "superblock": false, 00:15:20.741 "num_base_bdevs": 3, 00:15:20.741 "num_base_bdevs_discovered": 2, 00:15:20.741 "num_base_bdevs_operational": 2, 00:15:20.741 "base_bdevs_list": [ 00:15:20.741 { 00:15:20.741 "name": null, 00:15:20.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.741 "is_configured": false, 00:15:20.741 "data_offset": 0, 00:15:20.741 "data_size": 65536 00:15:20.741 }, 00:15:20.741 { 00:15:20.741 "name": "BaseBdev2", 00:15:20.741 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:20.741 "is_configured": true, 00:15:20.741 "data_offset": 0, 00:15:20.741 "data_size": 65536 00:15:20.741 }, 00:15:20.741 { 00:15:20.741 "name": "BaseBdev3", 00:15:20.741 "uuid": 
"2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:20.741 "is_configured": true, 00:15:20.741 "data_offset": 0, 00:15:20.741 "data_size": 65536 00:15:20.741 } 00:15:20.741 ] 00:15:20.741 }' 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.741 14:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.310 14:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.310 14:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.310 14:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.310 [2024-12-09 14:47:59.150703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.310 [2024-12-09 14:47:59.166861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:21.310 14:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.310 14:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.310 [2024-12-09 14:47:59.174078] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.247 14:48:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.247 "name": "raid_bdev1", 00:15:22.247 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:22.247 "strip_size_kb": 64, 00:15:22.247 "state": "online", 00:15:22.247 "raid_level": "raid5f", 00:15:22.247 "superblock": false, 00:15:22.247 "num_base_bdevs": 3, 00:15:22.247 "num_base_bdevs_discovered": 3, 00:15:22.247 "num_base_bdevs_operational": 3, 00:15:22.247 "process": { 00:15:22.247 "type": "rebuild", 00:15:22.247 "target": "spare", 00:15:22.247 "progress": { 00:15:22.247 "blocks": 20480, 00:15:22.247 "percent": 15 00:15:22.247 } 00:15:22.247 }, 00:15:22.247 "base_bdevs_list": [ 00:15:22.247 { 00:15:22.247 "name": "spare", 00:15:22.247 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:22.247 "is_configured": true, 00:15:22.247 "data_offset": 0, 00:15:22.247 "data_size": 65536 00:15:22.247 }, 00:15:22.247 { 00:15:22.247 "name": "BaseBdev2", 00:15:22.247 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:22.247 "is_configured": true, 00:15:22.247 "data_offset": 0, 00:15:22.247 "data_size": 65536 00:15:22.247 }, 00:15:22.247 { 00:15:22.247 "name": "BaseBdev3", 00:15:22.247 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:22.247 "is_configured": true, 00:15:22.247 "data_offset": 0, 00:15:22.247 "data_size": 65536 00:15:22.247 } 00:15:22.247 ] 00:15:22.247 }' 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.247 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.247 [2024-12-09 14:48:00.333310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.526 [2024-12-09 14:48:00.384406] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:22.526 [2024-12-09 14:48:00.384491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.526 [2024-12-09 14:48:00.384510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.526 [2024-12-09 14:48:00.384519] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.526 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.526 "name": "raid_bdev1", 00:15:22.526 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:22.526 "strip_size_kb": 64, 00:15:22.526 "state": "online", 00:15:22.526 "raid_level": "raid5f", 00:15:22.526 "superblock": false, 00:15:22.526 "num_base_bdevs": 3, 00:15:22.526 "num_base_bdevs_discovered": 2, 00:15:22.526 "num_base_bdevs_operational": 2, 00:15:22.526 "base_bdevs_list": [ 00:15:22.526 { 00:15:22.526 "name": null, 00:15:22.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.527 "is_configured": false, 00:15:22.527 "data_offset": 0, 00:15:22.527 "data_size": 65536 00:15:22.527 }, 00:15:22.527 { 00:15:22.527 "name": "BaseBdev2", 00:15:22.527 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:22.527 "is_configured": true, 00:15:22.527 "data_offset": 0, 00:15:22.527 "data_size": 65536 00:15:22.527 }, 00:15:22.527 { 00:15:22.527 "name": "BaseBdev3", 00:15:22.527 "uuid": 
"2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:22.527 "is_configured": true, 00:15:22.527 "data_offset": 0, 00:15:22.527 "data_size": 65536 00:15:22.527 } 00:15:22.527 ] 00:15:22.527 }' 00:15:22.527 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.527 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.786 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.044 14:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.044 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.044 "name": "raid_bdev1", 00:15:23.044 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:23.044 "strip_size_kb": 64, 00:15:23.044 "state": "online", 00:15:23.044 "raid_level": "raid5f", 00:15:23.044 "superblock": false, 00:15:23.044 "num_base_bdevs": 3, 00:15:23.044 "num_base_bdevs_discovered": 2, 00:15:23.044 "num_base_bdevs_operational": 2, 00:15:23.044 "base_bdevs_list": [ 00:15:23.044 { 00:15:23.044 
"name": null, 00:15:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.044 "is_configured": false, 00:15:23.044 "data_offset": 0, 00:15:23.044 "data_size": 65536 00:15:23.044 }, 00:15:23.044 { 00:15:23.044 "name": "BaseBdev2", 00:15:23.044 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:23.044 "is_configured": true, 00:15:23.044 "data_offset": 0, 00:15:23.044 "data_size": 65536 00:15:23.044 }, 00:15:23.044 { 00:15:23.044 "name": "BaseBdev3", 00:15:23.044 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:23.044 "is_configured": true, 00:15:23.044 "data_offset": 0, 00:15:23.044 "data_size": 65536 00:15:23.044 } 00:15:23.044 ] 00:15:23.044 }' 00:15:23.044 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.044 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.044 14:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.044 14:48:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.044 14:48:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.044 14:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.044 14:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.044 [2024-12-09 14:48:01.021082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.044 [2024-12-09 14:48:01.038115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:23.044 14:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.044 14:48:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:23.044 [2024-12-09 14:48:01.046271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.980 "name": "raid_bdev1", 00:15:23.980 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:23.980 "strip_size_kb": 64, 00:15:23.980 "state": "online", 00:15:23.980 "raid_level": "raid5f", 00:15:23.980 "superblock": false, 00:15:23.980 "num_base_bdevs": 3, 00:15:23.980 "num_base_bdevs_discovered": 3, 00:15:23.980 "num_base_bdevs_operational": 3, 00:15:23.980 "process": { 00:15:23.980 "type": "rebuild", 00:15:23.980 "target": "spare", 00:15:23.980 "progress": { 00:15:23.980 "blocks": 20480, 00:15:23.980 "percent": 15 00:15:23.980 } 00:15:23.980 }, 00:15:23.980 "base_bdevs_list": [ 00:15:23.980 { 00:15:23.980 "name": "spare", 00:15:23.980 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:23.980 "is_configured": true, 00:15:23.980 "data_offset": 0, 
00:15:23.980 "data_size": 65536 00:15:23.980 }, 00:15:23.980 { 00:15:23.980 "name": "BaseBdev2", 00:15:23.980 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:23.980 "is_configured": true, 00:15:23.980 "data_offset": 0, 00:15:23.980 "data_size": 65536 00:15:23.980 }, 00:15:23.980 { 00:15:23.980 "name": "BaseBdev3", 00:15:23.980 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:23.980 "is_configured": true, 00:15:23.980 "data_offset": 0, 00:15:23.980 "data_size": 65536 00:15:23.980 } 00:15:23.980 ] 00:15:23.980 }' 00:15:23.980 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.238 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.238 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.238 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.238 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.239 14:48:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.239 "name": "raid_bdev1", 00:15:24.239 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:24.239 "strip_size_kb": 64, 00:15:24.239 "state": "online", 00:15:24.239 "raid_level": "raid5f", 00:15:24.239 "superblock": false, 00:15:24.239 "num_base_bdevs": 3, 00:15:24.239 "num_base_bdevs_discovered": 3, 00:15:24.239 "num_base_bdevs_operational": 3, 00:15:24.239 "process": { 00:15:24.239 "type": "rebuild", 00:15:24.239 "target": "spare", 00:15:24.239 "progress": { 00:15:24.239 "blocks": 22528, 00:15:24.239 "percent": 17 00:15:24.239 } 00:15:24.239 }, 00:15:24.239 "base_bdevs_list": [ 00:15:24.239 { 00:15:24.239 "name": "spare", 00:15:24.239 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:24.239 "is_configured": true, 00:15:24.239 "data_offset": 0, 00:15:24.239 "data_size": 65536 00:15:24.239 }, 00:15:24.239 { 00:15:24.239 "name": "BaseBdev2", 00:15:24.239 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:24.239 "is_configured": true, 00:15:24.239 "data_offset": 0, 00:15:24.239 "data_size": 65536 00:15:24.239 }, 00:15:24.239 { 00:15:24.239 "name": "BaseBdev3", 00:15:24.239 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:24.239 "is_configured": true, 00:15:24.239 "data_offset": 0, 00:15:24.239 "data_size": 65536 00:15:24.239 } 
00:15:24.239 ] 00:15:24.239 }' 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.239 14:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.618 "name": "raid_bdev1", 00:15:25.618 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:25.618 
"strip_size_kb": 64, 00:15:25.618 "state": "online", 00:15:25.618 "raid_level": "raid5f", 00:15:25.618 "superblock": false, 00:15:25.618 "num_base_bdevs": 3, 00:15:25.618 "num_base_bdevs_discovered": 3, 00:15:25.618 "num_base_bdevs_operational": 3, 00:15:25.618 "process": { 00:15:25.618 "type": "rebuild", 00:15:25.618 "target": "spare", 00:15:25.618 "progress": { 00:15:25.618 "blocks": 45056, 00:15:25.618 "percent": 34 00:15:25.618 } 00:15:25.618 }, 00:15:25.618 "base_bdevs_list": [ 00:15:25.618 { 00:15:25.618 "name": "spare", 00:15:25.618 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:25.618 "is_configured": true, 00:15:25.618 "data_offset": 0, 00:15:25.618 "data_size": 65536 00:15:25.618 }, 00:15:25.618 { 00:15:25.618 "name": "BaseBdev2", 00:15:25.618 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:25.618 "is_configured": true, 00:15:25.618 "data_offset": 0, 00:15:25.618 "data_size": 65536 00:15:25.618 }, 00:15:25.618 { 00:15:25.618 "name": "BaseBdev3", 00:15:25.618 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:25.618 "is_configured": true, 00:15:25.618 "data_offset": 0, 00:15:25.618 "data_size": 65536 00:15:25.618 } 00:15:25.618 ] 00:15:25.618 }' 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.618 14:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.561 14:48:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.561 "name": "raid_bdev1", 00:15:26.561 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:26.561 "strip_size_kb": 64, 00:15:26.561 "state": "online", 00:15:26.561 "raid_level": "raid5f", 00:15:26.561 "superblock": false, 00:15:26.561 "num_base_bdevs": 3, 00:15:26.561 "num_base_bdevs_discovered": 3, 00:15:26.561 "num_base_bdevs_operational": 3, 00:15:26.561 "process": { 00:15:26.561 "type": "rebuild", 00:15:26.561 "target": "spare", 00:15:26.561 "progress": { 00:15:26.561 "blocks": 67584, 00:15:26.561 "percent": 51 00:15:26.561 } 00:15:26.561 }, 00:15:26.561 "base_bdevs_list": [ 00:15:26.561 { 00:15:26.561 "name": "spare", 00:15:26.561 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:26.561 "is_configured": true, 00:15:26.561 "data_offset": 0, 00:15:26.561 "data_size": 65536 00:15:26.561 }, 00:15:26.561 { 00:15:26.561 "name": "BaseBdev2", 00:15:26.561 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:26.561 
"is_configured": true, 00:15:26.561 "data_offset": 0, 00:15:26.561 "data_size": 65536 00:15:26.561 }, 00:15:26.561 { 00:15:26.561 "name": "BaseBdev3", 00:15:26.561 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:26.561 "is_configured": true, 00:15:26.561 "data_offset": 0, 00:15:26.561 "data_size": 65536 00:15:26.561 } 00:15:26.561 ] 00:15:26.561 }' 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.561 14:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:27.508 14:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.769 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.769 "name": "raid_bdev1", 00:15:27.769 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:27.769 "strip_size_kb": 64, 00:15:27.769 "state": "online", 00:15:27.769 "raid_level": "raid5f", 00:15:27.769 "superblock": false, 00:15:27.769 "num_base_bdevs": 3, 00:15:27.769 "num_base_bdevs_discovered": 3, 00:15:27.769 "num_base_bdevs_operational": 3, 00:15:27.769 "process": { 00:15:27.769 "type": "rebuild", 00:15:27.769 "target": "spare", 00:15:27.769 "progress": { 00:15:27.769 "blocks": 92160, 00:15:27.769 "percent": 70 00:15:27.769 } 00:15:27.769 }, 00:15:27.769 "base_bdevs_list": [ 00:15:27.769 { 00:15:27.769 "name": "spare", 00:15:27.769 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:27.769 "is_configured": true, 00:15:27.769 "data_offset": 0, 00:15:27.769 "data_size": 65536 00:15:27.769 }, 00:15:27.769 { 00:15:27.769 "name": "BaseBdev2", 00:15:27.769 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:27.769 "is_configured": true, 00:15:27.769 "data_offset": 0, 00:15:27.769 "data_size": 65536 00:15:27.769 }, 00:15:27.769 { 00:15:27.769 "name": "BaseBdev3", 00:15:27.769 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:27.769 "is_configured": true, 00:15:27.769 "data_offset": 0, 00:15:27.769 "data_size": 65536 00:15:27.769 } 00:15:27.769 ] 00:15:27.769 }' 00:15:27.769 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.769 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.769 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.769 14:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.769 14:48:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.704 "name": "raid_bdev1", 00:15:28.704 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:28.704 "strip_size_kb": 64, 00:15:28.704 "state": "online", 00:15:28.704 "raid_level": "raid5f", 00:15:28.704 "superblock": false, 00:15:28.704 "num_base_bdevs": 3, 00:15:28.704 "num_base_bdevs_discovered": 3, 00:15:28.704 "num_base_bdevs_operational": 3, 00:15:28.704 "process": { 00:15:28.704 "type": "rebuild", 00:15:28.704 "target": "spare", 00:15:28.704 "progress": { 00:15:28.704 "blocks": 114688, 00:15:28.704 "percent": 87 00:15:28.704 } 00:15:28.704 }, 00:15:28.704 "base_bdevs_list": [ 00:15:28.704 { 
00:15:28.704 "name": "spare", 00:15:28.704 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:28.704 "is_configured": true, 00:15:28.704 "data_offset": 0, 00:15:28.704 "data_size": 65536 00:15:28.704 }, 00:15:28.704 { 00:15:28.704 "name": "BaseBdev2", 00:15:28.704 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:28.704 "is_configured": true, 00:15:28.704 "data_offset": 0, 00:15:28.704 "data_size": 65536 00:15:28.704 }, 00:15:28.704 { 00:15:28.704 "name": "BaseBdev3", 00:15:28.704 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:28.704 "is_configured": true, 00:15:28.704 "data_offset": 0, 00:15:28.704 "data_size": 65536 00:15:28.704 } 00:15:28.704 ] 00:15:28.704 }' 00:15:28.704 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.963 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.963 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.963 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.963 14:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.531 [2024-12-09 14:48:07.501183] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:29.531 [2024-12-09 14:48:07.501283] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:29.531 [2024-12-09 14:48:07.501323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.789 14:48:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.789 14:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.048 14:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.048 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.048 "name": "raid_bdev1", 00:15:30.048 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:30.048 "strip_size_kb": 64, 00:15:30.048 "state": "online", 00:15:30.048 "raid_level": "raid5f", 00:15:30.048 "superblock": false, 00:15:30.048 "num_base_bdevs": 3, 00:15:30.048 "num_base_bdevs_discovered": 3, 00:15:30.048 "num_base_bdevs_operational": 3, 00:15:30.048 "base_bdevs_list": [ 00:15:30.048 { 00:15:30.048 "name": "spare", 00:15:30.048 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:30.048 "is_configured": true, 00:15:30.048 "data_offset": 0, 00:15:30.048 "data_size": 65536 00:15:30.048 }, 00:15:30.048 { 00:15:30.048 "name": "BaseBdev2", 00:15:30.048 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:30.048 "is_configured": true, 00:15:30.048 "data_offset": 0, 00:15:30.048 "data_size": 65536 00:15:30.048 }, 00:15:30.048 { 00:15:30.048 "name": "BaseBdev3", 00:15:30.048 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:30.048 "is_configured": true, 00:15:30.048 "data_offset": 0, 00:15:30.048 "data_size": 65536 00:15:30.048 } 
00:15:30.048 ] 00:15:30.048 }' 00:15:30.048 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.048 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:30.048 14:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.048 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:30.048 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:30.048 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.048 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.049 "name": "raid_bdev1", 00:15:30.049 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:30.049 "strip_size_kb": 64, 00:15:30.049 "state": "online", 00:15:30.049 "raid_level": "raid5f", 00:15:30.049 "superblock": false, 
00:15:30.049 "num_base_bdevs": 3, 00:15:30.049 "num_base_bdevs_discovered": 3, 00:15:30.049 "num_base_bdevs_operational": 3, 00:15:30.049 "base_bdevs_list": [ 00:15:30.049 { 00:15:30.049 "name": "spare", 00:15:30.049 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:30.049 "is_configured": true, 00:15:30.049 "data_offset": 0, 00:15:30.049 "data_size": 65536 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "name": "BaseBdev2", 00:15:30.049 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:30.049 "is_configured": true, 00:15:30.049 "data_offset": 0, 00:15:30.049 "data_size": 65536 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "name": "BaseBdev3", 00:15:30.049 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 00:15:30.049 "is_configured": true, 00:15:30.049 "data_offset": 0, 00:15:30.049 "data_size": 65536 00:15:30.049 } 00:15:30.049 ] 00:15:30.049 }' 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.049 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.307 
14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.307 "name": "raid_bdev1", 00:15:30.307 "uuid": "fe259180-0d59-42f6-ae8a-ac1743fd5fb9", 00:15:30.307 "strip_size_kb": 64, 00:15:30.307 "state": "online", 00:15:30.307 "raid_level": "raid5f", 00:15:30.307 "superblock": false, 00:15:30.307 "num_base_bdevs": 3, 00:15:30.307 "num_base_bdevs_discovered": 3, 00:15:30.307 "num_base_bdevs_operational": 3, 00:15:30.307 "base_bdevs_list": [ 00:15:30.307 { 00:15:30.307 "name": "spare", 00:15:30.307 "uuid": "58475a8f-da5f-51e9-9c99-9b5d9aec01c7", 00:15:30.307 "is_configured": true, 00:15:30.307 "data_offset": 0, 00:15:30.307 "data_size": 65536 00:15:30.307 }, 00:15:30.307 { 00:15:30.307 "name": "BaseBdev2", 00:15:30.307 "uuid": "e1620505-885b-500f-8c61-4a5e40577403", 00:15:30.307 "is_configured": true, 00:15:30.307 "data_offset": 0, 00:15:30.307 "data_size": 65536 00:15:30.307 }, 00:15:30.307 { 00:15:30.307 "name": "BaseBdev3", 00:15:30.307 "uuid": "2cb60d78-b434-5e50-98a4-4c2e9a6f554d", 
00:15:30.307 "is_configured": true, 00:15:30.307 "data_offset": 0, 00:15:30.307 "data_size": 65536 00:15:30.307 } 00:15:30.307 ] 00:15:30.307 }' 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.307 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.565 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:30.565 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.566 [2024-12-09 14:48:08.593149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.566 [2024-12-09 14:48:08.593184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.566 [2024-12-09 14:48:08.593277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.566 [2024-12-09 14:48:08.593361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.566 [2024-12-09 14:48:08.593376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.566 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:30.825 /dev/nbd0 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.825 1+0 records in 00:15:30.825 1+0 records out 00:15:30.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455921 s, 9.0 MB/s 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.825 14:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:31.085 /dev/nbd1 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:31.085 14:48:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.085 1+0 records in 00:15:31.085 1+0 records out 00:15:31.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408244 s, 10.0 MB/s 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:31.085 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:31.390 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:31.390 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.390 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.390 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.390 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:31.390 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.390 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.662 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82894 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82894 ']' 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82894 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82894 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.921 killing process with pid 82894 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82894' 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82894 00:15:31.921 
Received shutdown signal, test time was about 60.000000 seconds 00:15:31.921 00:15:31.921 Latency(us) 00:15:31.921 [2024-12-09T14:48:10.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.921 [2024-12-09T14:48:10.043Z] =================================================================================================================== 00:15:31.921 [2024-12-09T14:48:10.043Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.921 [2024-12-09 14:48:09.848501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.921 14:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82894 00:15:32.180 [2024-12-09 14:48:10.257951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.555 14:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:33.555 00:15:33.555 real 0m15.157s 00:15:33.555 user 0m18.587s 00:15:33.555 sys 0m2.038s 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.556 ************************************ 00:15:33.556 END TEST raid5f_rebuild_test 00:15:33.556 ************************************ 00:15:33.556 14:48:11 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:33.556 14:48:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:33.556 14:48:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.556 14:48:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.556 ************************************ 00:15:33.556 START TEST raid5f_rebuild_test_sb 00:15:33.556 ************************************ 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:33.556 
14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=83334 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 83334 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83334 ']' 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.556 14:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:33.556 Zero copy mechanism will not be used. 00:15:33.556 [2024-12-09 14:48:11.512557] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:15:33.556 [2024-12-09 14:48:11.512683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83334 ] 00:15:33.814 [2024-12-09 14:48:11.686646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.814 [2024-12-09 14:48:11.803035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.072 [2024-12-09 14:48:12.003421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.072 [2024-12-09 14:48:12.003487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.330 14:48:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.330 BaseBdev1_malloc 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.330 [2024-12-09 14:48:12.389846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:34.330 [2024-12-09 14:48:12.389906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.330 [2024-12-09 14:48:12.389930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:34.330 [2024-12-09 14:48:12.389942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.330 [2024-12-09 14:48:12.392123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.330 [2024-12-09 14:48:12.392168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:34.330 BaseBdev1 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.330 BaseBdev2_malloc 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.330 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.331 [2024-12-09 14:48:12.444216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:34.331 [2024-12-09 14:48:12.444277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.331 [2024-12-09 14:48:12.444301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:34.331 [2024-12-09 14:48:12.444313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.331 [2024-12-09 14:48:12.446341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.331 [2024-12-09 14:48:12.446376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:34.331 BaseBdev2 00:15:34.331 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.331 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:34.331 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:34.331 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.331 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.589 BaseBdev3_malloc 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.589 [2024-12-09 14:48:12.509169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:34.589 [2024-12-09 14:48:12.509222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.589 [2024-12-09 14:48:12.509244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:34.589 [2024-12-09 14:48:12.509254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.589 [2024-12-09 14:48:12.511293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.589 [2024-12-09 14:48:12.511335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:34.589 BaseBdev3 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.589 spare_malloc 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.589 spare_delay 00:15:34.589 
14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.589 [2024-12-09 14:48:12.573635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:34.589 [2024-12-09 14:48:12.573686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.589 [2024-12-09 14:48:12.573703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:34.589 [2024-12-09 14:48:12.573713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.589 [2024-12-09 14:48:12.575747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.589 [2024-12-09 14:48:12.575790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:34.589 spare 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.589 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.589 [2024-12-09 14:48:12.585688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.589 [2024-12-09 14:48:12.587425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.590 [2024-12-09 14:48:12.587493] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.590 [2024-12-09 14:48:12.587679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:34.590 [2024-12-09 14:48:12.587696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:34.590 [2024-12-09 14:48:12.587936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:34.590 [2024-12-09 14:48:12.593435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:34.590 [2024-12-09 14:48:12.593480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:34.590 [2024-12-09 14:48:12.593685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.590 "name": "raid_bdev1", 00:15:34.590 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:34.590 "strip_size_kb": 64, 00:15:34.590 "state": "online", 00:15:34.590 "raid_level": "raid5f", 00:15:34.590 "superblock": true, 00:15:34.590 "num_base_bdevs": 3, 00:15:34.590 "num_base_bdevs_discovered": 3, 00:15:34.590 "num_base_bdevs_operational": 3, 00:15:34.590 "base_bdevs_list": [ 00:15:34.590 { 00:15:34.590 "name": "BaseBdev1", 00:15:34.590 "uuid": "359eeafc-d170-53b8-9c8c-d97dc145f882", 00:15:34.590 "is_configured": true, 00:15:34.590 "data_offset": 2048, 00:15:34.590 "data_size": 63488 00:15:34.590 }, 00:15:34.590 { 00:15:34.590 "name": "BaseBdev2", 00:15:34.590 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:34.590 "is_configured": true, 00:15:34.590 "data_offset": 2048, 00:15:34.590 "data_size": 63488 00:15:34.590 }, 00:15:34.590 { 00:15:34.590 "name": "BaseBdev3", 00:15:34.590 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:34.590 "is_configured": true, 00:15:34.590 "data_offset": 2048, 00:15:34.590 "data_size": 63488 00:15:34.590 } 00:15:34.590 ] 00:15:34.590 }' 00:15:34.590 14:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.590 14:48:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.156 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.156 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:35.156 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.156 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.156 [2024-12-09 14:48:13.043456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:35.157 14:48:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:35.157 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:35.416 [2024-12-09 14:48:13.318794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:35.416 /dev/nbd0 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.416 1+0 records in 00:15:35.416 1+0 records out 00:15:35.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424144 s, 9.7 MB/s 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:35.416 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:35.674 496+0 records in 00:15:35.674 496+0 records out 00:15:35.674 65011712 bytes (65 MB, 62 MiB) copied, 0.364056 s, 179 MB/s 00:15:35.674 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:35.674 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.674 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:35.674 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.674 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:35.674 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.674 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.933 [2024-12-09 14:48:13.988971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.933 14:48:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.933 [2024-12-09 14:48:14.005183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.933 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.193 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.193 "name": "raid_bdev1", 00:15:36.193 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:36.193 "strip_size_kb": 64, 00:15:36.193 "state": "online", 00:15:36.193 "raid_level": "raid5f", 00:15:36.193 "superblock": true, 00:15:36.193 "num_base_bdevs": 3, 00:15:36.193 "num_base_bdevs_discovered": 2, 00:15:36.193 "num_base_bdevs_operational": 2, 00:15:36.193 "base_bdevs_list": [ 00:15:36.193 { 00:15:36.193 "name": null, 00:15:36.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.193 "is_configured": false, 00:15:36.193 "data_offset": 0, 00:15:36.193 "data_size": 63488 00:15:36.193 }, 00:15:36.193 { 00:15:36.193 "name": "BaseBdev2", 00:15:36.193 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:36.193 "is_configured": true, 00:15:36.193 "data_offset": 2048, 00:15:36.193 "data_size": 63488 00:15:36.193 }, 00:15:36.193 { 00:15:36.193 "name": "BaseBdev3", 00:15:36.193 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:36.193 "is_configured": true, 00:15:36.193 "data_offset": 2048, 00:15:36.193 "data_size": 63488 00:15:36.193 } 00:15:36.193 ] 00:15:36.193 }' 00:15:36.193 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.193 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.453 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.453 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.453 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.453 [2024-12-09 14:48:14.488468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.453 [2024-12-09 14:48:14.505207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:36.453 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.453 14:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:36.453 [2024-12-09 14:48:14.513153] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.392 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.392 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.392 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.392 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.392 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.684 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.684 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.684 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.684 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.684 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.684 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.684 "name": "raid_bdev1", 00:15:37.684 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:37.684 "strip_size_kb": 64, 00:15:37.684 "state": "online", 00:15:37.684 "raid_level": "raid5f", 00:15:37.684 "superblock": true, 00:15:37.684 "num_base_bdevs": 3, 00:15:37.684 "num_base_bdevs_discovered": 3, 00:15:37.684 "num_base_bdevs_operational": 3, 00:15:37.684 "process": { 00:15:37.685 "type": "rebuild", 00:15:37.685 "target": "spare", 00:15:37.685 "progress": { 
00:15:37.685 "blocks": 20480, 00:15:37.685 "percent": 16 00:15:37.685 } 00:15:37.685 }, 00:15:37.685 "base_bdevs_list": [ 00:15:37.685 { 00:15:37.685 "name": "spare", 00:15:37.685 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:37.685 "is_configured": true, 00:15:37.685 "data_offset": 2048, 00:15:37.685 "data_size": 63488 00:15:37.685 }, 00:15:37.685 { 00:15:37.685 "name": "BaseBdev2", 00:15:37.685 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:37.685 "is_configured": true, 00:15:37.685 "data_offset": 2048, 00:15:37.685 "data_size": 63488 00:15:37.685 }, 00:15:37.685 { 00:15:37.685 "name": "BaseBdev3", 00:15:37.685 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:37.685 "is_configured": true, 00:15:37.685 "data_offset": 2048, 00:15:37.685 "data_size": 63488 00:15:37.685 } 00:15:37.685 ] 00:15:37.685 }' 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.685 [2024-12-09 14:48:15.656275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.685 [2024-12-09 14:48:15.722946] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.685 [2024-12-09 14:48:15.723026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:37.685 [2024-12-09 14:48:15.723045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.685 [2024-12-09 14:48:15.723053] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.685 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.945 14:48:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.945 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.945 "name": "raid_bdev1", 00:15:37.945 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:37.945 "strip_size_kb": 64, 00:15:37.945 "state": "online", 00:15:37.945 "raid_level": "raid5f", 00:15:37.945 "superblock": true, 00:15:37.945 "num_base_bdevs": 3, 00:15:37.945 "num_base_bdevs_discovered": 2, 00:15:37.945 "num_base_bdevs_operational": 2, 00:15:37.945 "base_bdevs_list": [ 00:15:37.945 { 00:15:37.945 "name": null, 00:15:37.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.945 "is_configured": false, 00:15:37.945 "data_offset": 0, 00:15:37.945 "data_size": 63488 00:15:37.945 }, 00:15:37.945 { 00:15:37.945 "name": "BaseBdev2", 00:15:37.945 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:37.945 "is_configured": true, 00:15:37.945 "data_offset": 2048, 00:15:37.945 "data_size": 63488 00:15:37.945 }, 00:15:37.945 { 00:15:37.945 "name": "BaseBdev3", 00:15:37.945 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:37.945 "is_configured": true, 00:15:37.945 "data_offset": 2048, 00:15:37.945 "data_size": 63488 00:15:37.945 } 00:15:37.945 ] 00:15:37.945 }' 00:15:37.945 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.945 14:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.204 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.204 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.204 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.204 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.204 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.204 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.205 "name": "raid_bdev1", 00:15:38.205 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:38.205 "strip_size_kb": 64, 00:15:38.205 "state": "online", 00:15:38.205 "raid_level": "raid5f", 00:15:38.205 "superblock": true, 00:15:38.205 "num_base_bdevs": 3, 00:15:38.205 "num_base_bdevs_discovered": 2, 00:15:38.205 "num_base_bdevs_operational": 2, 00:15:38.205 "base_bdevs_list": [ 00:15:38.205 { 00:15:38.205 "name": null, 00:15:38.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.205 "is_configured": false, 00:15:38.205 "data_offset": 0, 00:15:38.205 "data_size": 63488 00:15:38.205 }, 00:15:38.205 { 00:15:38.205 "name": "BaseBdev2", 00:15:38.205 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:38.205 "is_configured": true, 00:15:38.205 "data_offset": 2048, 00:15:38.205 "data_size": 63488 00:15:38.205 }, 00:15:38.205 { 00:15:38.205 "name": "BaseBdev3", 00:15:38.205 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:38.205 "is_configured": true, 00:15:38.205 "data_offset": 2048, 00:15:38.205 "data_size": 63488 00:15:38.205 } 00:15:38.205 ] 00:15:38.205 }' 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.205 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.205 [2024-12-09 14:48:16.315074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.464 [2024-12-09 14:48:16.334282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:38.464 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.464 14:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:38.464 [2024-12-09 14:48:16.342878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.402 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.402 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.402 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.403 "name": "raid_bdev1", 00:15:39.403 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:39.403 "strip_size_kb": 64, 00:15:39.403 "state": "online", 00:15:39.403 "raid_level": "raid5f", 00:15:39.403 "superblock": true, 00:15:39.403 "num_base_bdevs": 3, 00:15:39.403 "num_base_bdevs_discovered": 3, 00:15:39.403 "num_base_bdevs_operational": 3, 00:15:39.403 "process": { 00:15:39.403 "type": "rebuild", 00:15:39.403 "target": "spare", 00:15:39.403 "progress": { 00:15:39.403 "blocks": 18432, 00:15:39.403 "percent": 14 00:15:39.403 } 00:15:39.403 }, 00:15:39.403 "base_bdevs_list": [ 00:15:39.403 { 00:15:39.403 "name": "spare", 00:15:39.403 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:39.403 "is_configured": true, 00:15:39.403 "data_offset": 2048, 00:15:39.403 "data_size": 63488 00:15:39.403 }, 00:15:39.403 { 00:15:39.403 "name": "BaseBdev2", 00:15:39.403 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:39.403 "is_configured": true, 00:15:39.403 "data_offset": 2048, 00:15:39.403 "data_size": 63488 00:15:39.403 }, 00:15:39.403 { 00:15:39.403 "name": "BaseBdev3", 00:15:39.403 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:39.403 "is_configured": true, 00:15:39.403 "data_offset": 2048, 00:15:39.403 "data_size": 63488 00:15:39.403 } 00:15:39.403 ] 00:15:39.403 }' 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:39.403 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=569 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:39.403 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.661 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.661 "name": "raid_bdev1", 00:15:39.661 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:39.661 "strip_size_kb": 64, 00:15:39.661 "state": "online", 00:15:39.661 "raid_level": "raid5f", 00:15:39.661 "superblock": true, 00:15:39.661 "num_base_bdevs": 3, 00:15:39.661 "num_base_bdevs_discovered": 3, 00:15:39.661 "num_base_bdevs_operational": 3, 00:15:39.661 "process": { 00:15:39.661 "type": "rebuild", 00:15:39.661 "target": "spare", 00:15:39.662 "progress": { 00:15:39.662 "blocks": 22528, 00:15:39.662 "percent": 17 00:15:39.662 } 00:15:39.662 }, 00:15:39.662 "base_bdevs_list": [ 00:15:39.662 { 00:15:39.662 "name": "spare", 00:15:39.662 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:39.662 "is_configured": true, 00:15:39.662 "data_offset": 2048, 00:15:39.662 "data_size": 63488 00:15:39.662 }, 00:15:39.662 { 00:15:39.662 "name": "BaseBdev2", 00:15:39.662 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:39.662 "is_configured": true, 00:15:39.662 "data_offset": 2048, 00:15:39.662 "data_size": 63488 00:15:39.662 }, 00:15:39.662 { 00:15:39.662 "name": "BaseBdev3", 00:15:39.662 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:39.662 "is_configured": true, 00:15:39.662 "data_offset": 2048, 00:15:39.662 "data_size": 63488 00:15:39.662 } 00:15:39.662 ] 00:15:39.662 }' 00:15:39.662 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.662 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.662 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.662 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:39.662 14:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.599 "name": "raid_bdev1", 00:15:40.599 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:40.599 "strip_size_kb": 64, 00:15:40.599 "state": "online", 00:15:40.599 "raid_level": "raid5f", 00:15:40.599 "superblock": true, 00:15:40.599 "num_base_bdevs": 3, 00:15:40.599 "num_base_bdevs_discovered": 3, 00:15:40.599 "num_base_bdevs_operational": 3, 00:15:40.599 "process": { 00:15:40.599 "type": "rebuild", 00:15:40.599 "target": "spare", 00:15:40.599 "progress": { 00:15:40.599 "blocks": 45056, 00:15:40.599 "percent": 35 00:15:40.599 } 00:15:40.599 }, 
00:15:40.599 "base_bdevs_list": [ 00:15:40.599 { 00:15:40.599 "name": "spare", 00:15:40.599 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:40.599 "is_configured": true, 00:15:40.599 "data_offset": 2048, 00:15:40.599 "data_size": 63488 00:15:40.599 }, 00:15:40.599 { 00:15:40.599 "name": "BaseBdev2", 00:15:40.599 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:40.599 "is_configured": true, 00:15:40.599 "data_offset": 2048, 00:15:40.599 "data_size": 63488 00:15:40.599 }, 00:15:40.599 { 00:15:40.599 "name": "BaseBdev3", 00:15:40.599 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:40.599 "is_configured": true, 00:15:40.599 "data_offset": 2048, 00:15:40.599 "data_size": 63488 00:15:40.599 } 00:15:40.599 ] 00:15:40.599 }' 00:15:40.599 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.858 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.858 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.858 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.858 14:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.795 
14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.795 "name": "raid_bdev1", 00:15:41.795 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:41.795 "strip_size_kb": 64, 00:15:41.795 "state": "online", 00:15:41.795 "raid_level": "raid5f", 00:15:41.795 "superblock": true, 00:15:41.795 "num_base_bdevs": 3, 00:15:41.795 "num_base_bdevs_discovered": 3, 00:15:41.795 "num_base_bdevs_operational": 3, 00:15:41.795 "process": { 00:15:41.795 "type": "rebuild", 00:15:41.795 "target": "spare", 00:15:41.795 "progress": { 00:15:41.795 "blocks": 69632, 00:15:41.795 "percent": 54 00:15:41.795 } 00:15:41.795 }, 00:15:41.795 "base_bdevs_list": [ 00:15:41.795 { 00:15:41.795 "name": "spare", 00:15:41.795 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:41.795 "is_configured": true, 00:15:41.795 "data_offset": 2048, 00:15:41.795 "data_size": 63488 00:15:41.795 }, 00:15:41.795 { 00:15:41.795 "name": "BaseBdev2", 00:15:41.795 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:41.795 "is_configured": true, 00:15:41.795 "data_offset": 2048, 00:15:41.795 "data_size": 63488 00:15:41.795 }, 00:15:41.795 { 00:15:41.795 "name": "BaseBdev3", 00:15:41.795 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:41.795 "is_configured": true, 00:15:41.795 "data_offset": 2048, 00:15:41.795 "data_size": 63488 00:15:41.795 } 00:15:41.795 ] 00:15:41.795 }' 00:15:41.795 14:48:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.795 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.054 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.054 14:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.994 14:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.994 14:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.994 "name": "raid_bdev1", 00:15:42.994 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:42.994 
"strip_size_kb": 64, 00:15:42.994 "state": "online", 00:15:42.994 "raid_level": "raid5f", 00:15:42.994 "superblock": true, 00:15:42.994 "num_base_bdevs": 3, 00:15:42.994 "num_base_bdevs_discovered": 3, 00:15:42.994 "num_base_bdevs_operational": 3, 00:15:42.994 "process": { 00:15:42.994 "type": "rebuild", 00:15:42.994 "target": "spare", 00:15:42.994 "progress": { 00:15:42.994 "blocks": 92160, 00:15:42.994 "percent": 72 00:15:42.994 } 00:15:42.994 }, 00:15:42.994 "base_bdevs_list": [ 00:15:42.994 { 00:15:42.994 "name": "spare", 00:15:42.994 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:42.994 "is_configured": true, 00:15:42.994 "data_offset": 2048, 00:15:42.994 "data_size": 63488 00:15:42.994 }, 00:15:42.994 { 00:15:42.994 "name": "BaseBdev2", 00:15:42.994 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:42.994 "is_configured": true, 00:15:42.994 "data_offset": 2048, 00:15:42.994 "data_size": 63488 00:15:42.994 }, 00:15:42.994 { 00:15:42.994 "name": "BaseBdev3", 00:15:42.994 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:42.994 "is_configured": true, 00:15:42.994 "data_offset": 2048, 00:15:42.994 "data_size": 63488 00:15:42.994 } 00:15:42.994 ] 00:15:42.994 }' 00:15:42.994 14:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.994 14:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.994 14:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.254 14:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.254 14:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.193 "name": "raid_bdev1", 00:15:44.193 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:44.193 "strip_size_kb": 64, 00:15:44.193 "state": "online", 00:15:44.193 "raid_level": "raid5f", 00:15:44.193 "superblock": true, 00:15:44.193 "num_base_bdevs": 3, 00:15:44.193 "num_base_bdevs_discovered": 3, 00:15:44.193 "num_base_bdevs_operational": 3, 00:15:44.193 "process": { 00:15:44.193 "type": "rebuild", 00:15:44.193 "target": "spare", 00:15:44.193 "progress": { 00:15:44.193 "blocks": 116736, 00:15:44.193 "percent": 91 00:15:44.193 } 00:15:44.193 }, 00:15:44.193 "base_bdevs_list": [ 00:15:44.193 { 00:15:44.193 "name": "spare", 00:15:44.193 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:44.193 "is_configured": true, 00:15:44.193 "data_offset": 2048, 00:15:44.193 "data_size": 63488 00:15:44.193 }, 00:15:44.193 { 00:15:44.193 "name": "BaseBdev2", 00:15:44.193 "uuid": 
"3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:44.193 "is_configured": true, 00:15:44.193 "data_offset": 2048, 00:15:44.193 "data_size": 63488 00:15:44.193 }, 00:15:44.193 { 00:15:44.193 "name": "BaseBdev3", 00:15:44.193 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:44.193 "is_configured": true, 00:15:44.193 "data_offset": 2048, 00:15:44.193 "data_size": 63488 00:15:44.193 } 00:15:44.193 ] 00:15:44.193 }' 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.193 14:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.794 [2024-12-09 14:48:22.598987] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:44.794 [2024-12-09 14:48:22.599074] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:44.794 [2024-12-09 14:48:22.599245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.363 "name": "raid_bdev1", 00:15:45.363 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:45.363 "strip_size_kb": 64, 00:15:45.363 "state": "online", 00:15:45.363 "raid_level": "raid5f", 00:15:45.363 "superblock": true, 00:15:45.363 "num_base_bdevs": 3, 00:15:45.363 "num_base_bdevs_discovered": 3, 00:15:45.363 "num_base_bdevs_operational": 3, 00:15:45.363 "base_bdevs_list": [ 00:15:45.363 { 00:15:45.363 "name": "spare", 00:15:45.363 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:45.363 "is_configured": true, 00:15:45.363 "data_offset": 2048, 00:15:45.363 "data_size": 63488 00:15:45.363 }, 00:15:45.363 { 00:15:45.363 "name": "BaseBdev2", 00:15:45.363 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:45.363 "is_configured": true, 00:15:45.363 "data_offset": 2048, 00:15:45.363 "data_size": 63488 00:15:45.363 }, 00:15:45.363 { 00:15:45.363 "name": "BaseBdev3", 00:15:45.363 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:45.363 "is_configured": true, 00:15:45.363 "data_offset": 2048, 00:15:45.363 "data_size": 63488 00:15:45.363 } 00:15:45.363 ] 00:15:45.363 }' 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.363 "name": "raid_bdev1", 00:15:45.363 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:45.363 "strip_size_kb": 64, 00:15:45.363 "state": "online", 00:15:45.363 "raid_level": "raid5f", 00:15:45.363 "superblock": true, 00:15:45.363 "num_base_bdevs": 3, 00:15:45.363 "num_base_bdevs_discovered": 3, 00:15:45.363 "num_base_bdevs_operational": 3, 00:15:45.363 "base_bdevs_list": [ 
00:15:45.363 { 00:15:45.363 "name": "spare", 00:15:45.363 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:45.363 "is_configured": true, 00:15:45.363 "data_offset": 2048, 00:15:45.363 "data_size": 63488 00:15:45.363 }, 00:15:45.363 { 00:15:45.363 "name": "BaseBdev2", 00:15:45.363 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:45.363 "is_configured": true, 00:15:45.363 "data_offset": 2048, 00:15:45.363 "data_size": 63488 00:15:45.363 }, 00:15:45.363 { 00:15:45.363 "name": "BaseBdev3", 00:15:45.363 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:45.363 "is_configured": true, 00:15:45.363 "data_offset": 2048, 00:15:45.363 "data_size": 63488 00:15:45.363 } 00:15:45.363 ] 00:15:45.363 }' 00:15:45.363 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.623 14:48:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.623 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.623 "name": "raid_bdev1", 00:15:45.623 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:45.623 "strip_size_kb": 64, 00:15:45.623 "state": "online", 00:15:45.623 "raid_level": "raid5f", 00:15:45.623 "superblock": true, 00:15:45.623 "num_base_bdevs": 3, 00:15:45.623 "num_base_bdevs_discovered": 3, 00:15:45.623 "num_base_bdevs_operational": 3, 00:15:45.623 "base_bdevs_list": [ 00:15:45.623 { 00:15:45.623 "name": "spare", 00:15:45.623 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:45.623 "is_configured": true, 00:15:45.623 "data_offset": 2048, 00:15:45.623 "data_size": 63488 00:15:45.623 }, 00:15:45.623 { 00:15:45.623 "name": "BaseBdev2", 00:15:45.623 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:45.623 "is_configured": true, 00:15:45.623 "data_offset": 2048, 00:15:45.623 "data_size": 63488 00:15:45.623 }, 00:15:45.623 { 00:15:45.623 "name": "BaseBdev3", 00:15:45.623 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:45.624 "is_configured": true, 00:15:45.624 "data_offset": 2048, 00:15:45.624 
"data_size": 63488 00:15:45.624 } 00:15:45.624 ] 00:15:45.624 }' 00:15:45.624 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.624 14:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.192 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.192 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.192 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.192 [2024-12-09 14:48:24.035465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.192 [2024-12-09 14:48:24.035498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.192 [2024-12-09 14:48:24.035623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.192 [2024-12-09 14:48:24.035719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.193 [2024-12-09 14:48:24.035735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.193 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:46.452 /dev/nbd0 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.452 14:48:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.452 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.453 1+0 records in 00:15:46.453 1+0 records out 00:15:46.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311918 s, 13.1 MB/s 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.453 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:46.713 /dev/nbd1 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:46.713 14:48:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.713 1+0 records in 00:15:46.713 1+0 records out 00:15:46.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459495 s, 8.9 MB/s 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.713 14:48:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.713 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:46.972 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:46.972 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.973 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.973 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.973 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:46.973 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.973 14:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:46.973 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.973 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.973 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.973 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.973 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.973 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.231 
14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.231 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.232 [2024-12-09 14:48:25.343178] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:47.232 
[2024-12-09 14:48:25.343288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.232 [2024-12-09 14:48:25.343318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:47.232 [2024-12-09 14:48:25.343333] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.232 [2024-12-09 14:48:25.346033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.232 [2024-12-09 14:48:25.346078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:47.232 [2024-12-09 14:48:25.346171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:47.232 [2024-12-09 14:48:25.346242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.232 [2024-12-09 14:48:25.346461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.232 [2024-12-09 14:48:25.346581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.232 spare 00:15:47.232 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.232 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:47.232 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.232 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.490 [2024-12-09 14:48:25.446507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:47.490 [2024-12-09 14:48:25.446542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:47.490 [2024-12-09 14:48:25.446856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:47.490 [2024-12-09 14:48:25.452386] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:47.490 [2024-12-09 14:48:25.452408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:47.490 [2024-12-09 14:48:25.452607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.490 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.490 "name": "raid_bdev1", 00:15:47.491 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:47.491 "strip_size_kb": 64, 00:15:47.491 "state": "online", 00:15:47.491 "raid_level": "raid5f", 00:15:47.491 "superblock": true, 00:15:47.491 "num_base_bdevs": 3, 00:15:47.491 "num_base_bdevs_discovered": 3, 00:15:47.491 "num_base_bdevs_operational": 3, 00:15:47.491 "base_bdevs_list": [ 00:15:47.491 { 00:15:47.491 "name": "spare", 00:15:47.491 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:47.491 "is_configured": true, 00:15:47.491 "data_offset": 2048, 00:15:47.491 "data_size": 63488 00:15:47.491 }, 00:15:47.491 { 00:15:47.491 "name": "BaseBdev2", 00:15:47.491 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:47.491 "is_configured": true, 00:15:47.491 "data_offset": 2048, 00:15:47.491 "data_size": 63488 00:15:47.491 }, 00:15:47.491 { 00:15:47.491 "name": "BaseBdev3", 00:15:47.491 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:47.491 "is_configured": true, 00:15:47.491 "data_offset": 2048, 00:15:47.491 "data_size": 63488 00:15:47.491 } 00:15:47.491 ] 00:15:47.491 }' 00:15:47.491 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.491 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.058 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.058 "name": "raid_bdev1", 00:15:48.058 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:48.058 "strip_size_kb": 64, 00:15:48.058 "state": "online", 00:15:48.058 "raid_level": "raid5f", 00:15:48.058 "superblock": true, 00:15:48.058 "num_base_bdevs": 3, 00:15:48.058 "num_base_bdevs_discovered": 3, 00:15:48.058 "num_base_bdevs_operational": 3, 00:15:48.058 "base_bdevs_list": [ 00:15:48.058 { 00:15:48.058 "name": "spare", 00:15:48.058 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:48.059 "is_configured": true, 00:15:48.059 "data_offset": 2048, 00:15:48.059 "data_size": 63488 00:15:48.059 }, 00:15:48.059 { 00:15:48.059 "name": "BaseBdev2", 00:15:48.059 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:48.059 "is_configured": true, 00:15:48.059 "data_offset": 2048, 00:15:48.059 "data_size": 63488 00:15:48.059 }, 00:15:48.059 { 00:15:48.059 "name": "BaseBdev3", 00:15:48.059 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:48.059 "is_configured": true, 00:15:48.059 "data_offset": 2048, 00:15:48.059 "data_size": 63488 00:15:48.059 } 00:15:48.059 ] 00:15:48.059 }' 00:15:48.059 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:48.059 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.059 14:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.059 [2024-12-09 14:48:26.085799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.059 "name": "raid_bdev1", 00:15:48.059 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:48.059 "strip_size_kb": 64, 00:15:48.059 "state": "online", 00:15:48.059 "raid_level": "raid5f", 00:15:48.059 "superblock": true, 00:15:48.059 "num_base_bdevs": 3, 00:15:48.059 "num_base_bdevs_discovered": 2, 00:15:48.059 "num_base_bdevs_operational": 2, 00:15:48.059 "base_bdevs_list": [ 00:15:48.059 { 00:15:48.059 "name": null, 00:15:48.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.059 "is_configured": false, 00:15:48.059 "data_offset": 0, 00:15:48.059 "data_size": 63488 00:15:48.059 }, 00:15:48.059 { 00:15:48.059 "name": "BaseBdev2", 
00:15:48.059 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:48.059 "is_configured": true, 00:15:48.059 "data_offset": 2048, 00:15:48.059 "data_size": 63488 00:15:48.059 }, 00:15:48.059 { 00:15:48.059 "name": "BaseBdev3", 00:15:48.059 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:48.059 "is_configured": true, 00:15:48.059 "data_offset": 2048, 00:15:48.059 "data_size": 63488 00:15:48.059 } 00:15:48.059 ] 00:15:48.059 }' 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.059 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.627 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:48.627 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.627 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.627 [2024-12-09 14:48:26.517100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.627 [2024-12-09 14:48:26.517370] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.627 [2024-12-09 14:48:26.517394] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:48.627 [2024-12-09 14:48:26.517441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.627 [2024-12-09 14:48:26.532587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:48.627 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.627 14:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:48.627 [2024-12-09 14:48:26.539750] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.565 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.565 "name": "raid_bdev1", 00:15:49.565 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:49.565 "strip_size_kb": 64, 00:15:49.565 "state": "online", 00:15:49.565 
"raid_level": "raid5f", 00:15:49.565 "superblock": true, 00:15:49.565 "num_base_bdevs": 3, 00:15:49.565 "num_base_bdevs_discovered": 3, 00:15:49.565 "num_base_bdevs_operational": 3, 00:15:49.565 "process": { 00:15:49.565 "type": "rebuild", 00:15:49.565 "target": "spare", 00:15:49.565 "progress": { 00:15:49.565 "blocks": 18432, 00:15:49.565 "percent": 14 00:15:49.565 } 00:15:49.565 }, 00:15:49.565 "base_bdevs_list": [ 00:15:49.565 { 00:15:49.566 "name": "spare", 00:15:49.566 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:49.566 "is_configured": true, 00:15:49.566 "data_offset": 2048, 00:15:49.566 "data_size": 63488 00:15:49.566 }, 00:15:49.566 { 00:15:49.566 "name": "BaseBdev2", 00:15:49.566 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:49.566 "is_configured": true, 00:15:49.566 "data_offset": 2048, 00:15:49.566 "data_size": 63488 00:15:49.566 }, 00:15:49.566 { 00:15:49.566 "name": "BaseBdev3", 00:15:49.566 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:49.566 "is_configured": true, 00:15:49.566 "data_offset": 2048, 00:15:49.566 "data_size": 63488 00:15:49.566 } 00:15:49.566 ] 00:15:49.566 }' 00:15:49.566 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.566 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.566 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.566 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.566 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.566 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.566 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.566 [2024-12-09 14:48:27.663307] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.824 [2024-12-09 14:48:27.750311] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:49.824 [2024-12-09 14:48:27.750457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.824 [2024-12-09 14:48:27.750512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.824 [2024-12-09 14:48:27.750567] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:49.824 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.824 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.824 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.824 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.824 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.825 "name": "raid_bdev1", 00:15:49.825 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:49.825 "strip_size_kb": 64, 00:15:49.825 "state": "online", 00:15:49.825 "raid_level": "raid5f", 00:15:49.825 "superblock": true, 00:15:49.825 "num_base_bdevs": 3, 00:15:49.825 "num_base_bdevs_discovered": 2, 00:15:49.825 "num_base_bdevs_operational": 2, 00:15:49.825 "base_bdevs_list": [ 00:15:49.825 { 00:15:49.825 "name": null, 00:15:49.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.825 "is_configured": false, 00:15:49.825 "data_offset": 0, 00:15:49.825 "data_size": 63488 00:15:49.825 }, 00:15:49.825 { 00:15:49.825 "name": "BaseBdev2", 00:15:49.825 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:49.825 "is_configured": true, 00:15:49.825 "data_offset": 2048, 00:15:49.825 "data_size": 63488 00:15:49.825 }, 00:15:49.825 { 00:15:49.825 "name": "BaseBdev3", 00:15:49.825 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:49.825 "is_configured": true, 00:15:49.825 "data_offset": 2048, 00:15:49.825 "data_size": 63488 00:15:49.825 } 00:15:49.825 ] 00:15:49.825 }' 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.825 14:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.084 14:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.084 14:48:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.084 14:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.084 [2024-12-09 14:48:28.193260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.084 [2024-12-09 14:48:28.193393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.084 [2024-12-09 14:48:28.193471] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:50.084 [2024-12-09 14:48:28.193528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.084 [2024-12-09 14:48:28.194165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.084 [2024-12-09 14:48:28.194250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.084 [2024-12-09 14:48:28.194446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:50.084 [2024-12-09 14:48:28.194511] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.084 [2024-12-09 14:48:28.194562] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:50.084 [2024-12-09 14:48:28.194675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.344 [2024-12-09 14:48:28.211699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:50.344 spare 00:15:50.344 14:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.344 14:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:50.344 [2024-12-09 14:48:28.219315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.287 "name": "raid_bdev1", 00:15:51.287 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:51.287 "strip_size_kb": 64, 00:15:51.287 "state": 
"online", 00:15:51.287 "raid_level": "raid5f", 00:15:51.287 "superblock": true, 00:15:51.287 "num_base_bdevs": 3, 00:15:51.287 "num_base_bdevs_discovered": 3, 00:15:51.287 "num_base_bdevs_operational": 3, 00:15:51.287 "process": { 00:15:51.287 "type": "rebuild", 00:15:51.287 "target": "spare", 00:15:51.287 "progress": { 00:15:51.287 "blocks": 20480, 00:15:51.287 "percent": 16 00:15:51.287 } 00:15:51.287 }, 00:15:51.287 "base_bdevs_list": [ 00:15:51.287 { 00:15:51.287 "name": "spare", 00:15:51.287 "uuid": "d826ba30-102e-5026-aafa-8fc5c1bfa457", 00:15:51.287 "is_configured": true, 00:15:51.287 "data_offset": 2048, 00:15:51.287 "data_size": 63488 00:15:51.287 }, 00:15:51.287 { 00:15:51.287 "name": "BaseBdev2", 00:15:51.287 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:51.287 "is_configured": true, 00:15:51.287 "data_offset": 2048, 00:15:51.287 "data_size": 63488 00:15:51.287 }, 00:15:51.287 { 00:15:51.287 "name": "BaseBdev3", 00:15:51.287 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:51.287 "is_configured": true, 00:15:51.287 "data_offset": 2048, 00:15:51.287 "data_size": 63488 00:15:51.287 } 00:15:51.287 ] 00:15:51.287 }' 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.287 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.287 [2024-12-09 14:48:29.362690] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.547 [2024-12-09 14:48:29.430153] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.547 [2024-12-09 14:48:29.430221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.547 [2024-12-09 14:48:29.430245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.547 [2024-12-09 14:48:29.430255] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.547 14:48:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.547 "name": "raid_bdev1", 00:15:51.547 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:51.547 "strip_size_kb": 64, 00:15:51.547 "state": "online", 00:15:51.547 "raid_level": "raid5f", 00:15:51.547 "superblock": true, 00:15:51.547 "num_base_bdevs": 3, 00:15:51.547 "num_base_bdevs_discovered": 2, 00:15:51.547 "num_base_bdevs_operational": 2, 00:15:51.547 "base_bdevs_list": [ 00:15:51.547 { 00:15:51.547 "name": null, 00:15:51.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.547 "is_configured": false, 00:15:51.547 "data_offset": 0, 00:15:51.547 "data_size": 63488 00:15:51.547 }, 00:15:51.547 { 00:15:51.547 "name": "BaseBdev2", 00:15:51.547 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:51.547 "is_configured": true, 00:15:51.547 "data_offset": 2048, 00:15:51.547 "data_size": 63488 00:15:51.547 }, 00:15:51.547 { 00:15:51.547 "name": "BaseBdev3", 00:15:51.547 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:51.547 "is_configured": true, 00:15:51.547 "data_offset": 2048, 00:15:51.547 "data_size": 63488 00:15:51.547 } 00:15:51.547 ] 00:15:51.547 }' 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.547 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.807 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.807 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.807 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.807 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.807 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.807 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.807 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.066 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.066 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.066 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.066 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.066 "name": "raid_bdev1", 00:15:52.066 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:52.066 "strip_size_kb": 64, 00:15:52.066 "state": "online", 00:15:52.066 "raid_level": "raid5f", 00:15:52.066 "superblock": true, 00:15:52.066 "num_base_bdevs": 3, 00:15:52.066 "num_base_bdevs_discovered": 2, 00:15:52.066 "num_base_bdevs_operational": 2, 00:15:52.066 "base_bdevs_list": [ 00:15:52.066 { 00:15:52.066 "name": null, 00:15:52.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.066 "is_configured": false, 00:15:52.066 "data_offset": 0, 00:15:52.066 "data_size": 63488 00:15:52.066 }, 00:15:52.066 { 00:15:52.066 "name": "BaseBdev2", 00:15:52.066 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:52.066 "is_configured": true, 00:15:52.066 "data_offset": 2048, 00:15:52.066 "data_size": 63488 00:15:52.066 }, 00:15:52.066 { 00:15:52.066 "name": "BaseBdev3", 00:15:52.066 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:52.066 
"is_configured": true, 00:15:52.066 "data_offset": 2048, 00:15:52.066 "data_size": 63488 00:15:52.066 } 00:15:52.066 ] 00:15:52.066 }' 00:15:52.066 14:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.066 [2024-12-09 14:48:30.077247] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:52.066 [2024-12-09 14:48:30.077313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.066 [2024-12-09 14:48:30.077349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:52.066 [2024-12-09 14:48:30.077361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.066 [2024-12-09 14:48:30.077993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.066 
[2024-12-09 14:48:30.078031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.066 [2024-12-09 14:48:30.078158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:52.066 [2024-12-09 14:48:30.078182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:52.066 [2024-12-09 14:48:30.078215] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:52.066 [2024-12-09 14:48:30.078231] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:52.066 BaseBdev1 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.066 14:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.022 14:48:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.022 "name": "raid_bdev1", 00:15:53.022 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:53.022 "strip_size_kb": 64, 00:15:53.022 "state": "online", 00:15:53.022 "raid_level": "raid5f", 00:15:53.022 "superblock": true, 00:15:53.022 "num_base_bdevs": 3, 00:15:53.022 "num_base_bdevs_discovered": 2, 00:15:53.022 "num_base_bdevs_operational": 2, 00:15:53.022 "base_bdevs_list": [ 00:15:53.022 { 00:15:53.022 "name": null, 00:15:53.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.022 "is_configured": false, 00:15:53.022 "data_offset": 0, 00:15:53.022 "data_size": 63488 00:15:53.022 }, 00:15:53.022 { 00:15:53.022 "name": "BaseBdev2", 00:15:53.022 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:53.022 "is_configured": true, 00:15:53.022 "data_offset": 2048, 00:15:53.022 "data_size": 63488 00:15:53.022 }, 00:15:53.022 { 00:15:53.022 "name": "BaseBdev3", 00:15:53.022 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:53.022 "is_configured": true, 00:15:53.022 "data_offset": 2048, 00:15:53.022 "data_size": 63488 00:15:53.022 } 00:15:53.022 ] 00:15:53.022 }' 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.022 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.591 "name": "raid_bdev1", 00:15:53.591 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:53.591 "strip_size_kb": 64, 00:15:53.591 "state": "online", 00:15:53.591 "raid_level": "raid5f", 00:15:53.591 "superblock": true, 00:15:53.591 "num_base_bdevs": 3, 00:15:53.591 "num_base_bdevs_discovered": 2, 00:15:53.591 "num_base_bdevs_operational": 2, 00:15:53.591 "base_bdevs_list": [ 00:15:53.591 { 00:15:53.591 "name": null, 00:15:53.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.591 "is_configured": false, 00:15:53.591 "data_offset": 0, 00:15:53.591 "data_size": 63488 00:15:53.591 }, 00:15:53.591 { 00:15:53.591 "name": "BaseBdev2", 00:15:53.591 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 
00:15:53.591 "is_configured": true, 00:15:53.591 "data_offset": 2048, 00:15:53.591 "data_size": 63488 00:15:53.591 }, 00:15:53.591 { 00:15:53.591 "name": "BaseBdev3", 00:15:53.591 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:53.591 "is_configured": true, 00:15:53.591 "data_offset": 2048, 00:15:53.591 "data_size": 63488 00:15:53.591 } 00:15:53.591 ] 00:15:53.591 }' 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.591 14:48:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.591 [2024-12-09 14:48:31.642819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.591 [2024-12-09 14:48:31.643034] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.591 [2024-12-09 14:48:31.643097] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:53.591 request: 00:15:53.591 { 00:15:53.591 "base_bdev": "BaseBdev1", 00:15:53.591 "raid_bdev": "raid_bdev1", 00:15:53.591 "method": "bdev_raid_add_base_bdev", 00:15:53.591 "req_id": 1 00:15:53.591 } 00:15:53.591 Got JSON-RPC error response 00:15:53.591 response: 00:15:53.591 { 00:15:53.591 "code": -22, 00:15:53.591 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:53.591 } 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:53.591 14:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.973 "name": "raid_bdev1", 00:15:54.973 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:54.973 "strip_size_kb": 64, 00:15:54.973 "state": "online", 00:15:54.973 "raid_level": "raid5f", 00:15:54.973 "superblock": true, 00:15:54.973 "num_base_bdevs": 3, 00:15:54.973 "num_base_bdevs_discovered": 2, 00:15:54.973 "num_base_bdevs_operational": 2, 00:15:54.973 "base_bdevs_list": [ 00:15:54.973 { 00:15:54.973 "name": null, 00:15:54.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.973 "is_configured": false, 00:15:54.973 "data_offset": 0, 00:15:54.973 "data_size": 63488 00:15:54.973 }, 00:15:54.973 { 00:15:54.973 
"name": "BaseBdev2", 00:15:54.973 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:54.973 "is_configured": true, 00:15:54.973 "data_offset": 2048, 00:15:54.973 "data_size": 63488 00:15:54.973 }, 00:15:54.973 { 00:15:54.973 "name": "BaseBdev3", 00:15:54.973 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:54.973 "is_configured": true, 00:15:54.973 "data_offset": 2048, 00:15:54.973 "data_size": 63488 00:15:54.973 } 00:15:54.973 ] 00:15:54.973 }' 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.973 14:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.973 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.233 "name": "raid_bdev1", 00:15:55.233 "uuid": "50b15710-c6fc-43f8-bed4-cf688ce2843a", 00:15:55.233 
"strip_size_kb": 64, 00:15:55.233 "state": "online", 00:15:55.233 "raid_level": "raid5f", 00:15:55.233 "superblock": true, 00:15:55.233 "num_base_bdevs": 3, 00:15:55.233 "num_base_bdevs_discovered": 2, 00:15:55.233 "num_base_bdevs_operational": 2, 00:15:55.233 "base_bdevs_list": [ 00:15:55.233 { 00:15:55.233 "name": null, 00:15:55.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.233 "is_configured": false, 00:15:55.233 "data_offset": 0, 00:15:55.233 "data_size": 63488 00:15:55.233 }, 00:15:55.233 { 00:15:55.233 "name": "BaseBdev2", 00:15:55.233 "uuid": "3369d3de-77c7-5d03-82fa-f547ba09b7b8", 00:15:55.233 "is_configured": true, 00:15:55.233 "data_offset": 2048, 00:15:55.233 "data_size": 63488 00:15:55.233 }, 00:15:55.233 { 00:15:55.233 "name": "BaseBdev3", 00:15:55.233 "uuid": "1b5137bf-f6f9-5d64-9f6b-b53a11efca65", 00:15:55.233 "is_configured": true, 00:15:55.233 "data_offset": 2048, 00:15:55.233 "data_size": 63488 00:15:55.233 } 00:15:55.233 ] 00:15:55.233 }' 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 83334 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83334 ']' 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 83334 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.233 14:48:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83334 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83334' 00:15:55.233 killing process with pid 83334 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 83334 00:15:55.233 Received shutdown signal, test time was about 60.000000 seconds 00:15:55.233 00:15:55.233 Latency(us) 00:15:55.233 [2024-12-09T14:48:33.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.233 [2024-12-09T14:48:33.355Z] =================================================================================================================== 00:15:55.233 [2024-12-09T14:48:33.355Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:55.233 [2024-12-09 14:48:33.240376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.233 [2024-12-09 14:48:33.240508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.233 14:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 83334 00:15:55.234 [2024-12-09 14:48:33.240600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.234 [2024-12-09 14:48:33.240616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:55.802 [2024-12-09 14:48:33.630369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.741 14:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:56.741 00:15:56.741 real 0m23.310s 00:15:56.741 user 0m29.968s 
00:15:56.741 sys 0m2.620s 00:15:56.741 ************************************ 00:15:56.741 END TEST raid5f_rebuild_test_sb 00:15:56.741 ************************************ 00:15:56.741 14:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.741 14:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.741 14:48:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:56.741 14:48:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:56.741 14:48:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:56.741 14:48:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.741 14:48:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.741 ************************************ 00:15:56.741 START TEST raid5f_state_function_test 00:15:56.741 ************************************ 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84082 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84082' 00:15:56.741 Process raid pid: 84082 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84082 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84082 ']' 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.741 14:48:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.001 [2024-12-09 14:48:34.898472] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:15:57.001 [2024-12-09 14:48:34.898703] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.001 [2024-12-09 14:48:35.092188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.260 [2024-12-09 14:48:35.205385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.519 [2024-12-09 14:48:35.402185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.519 [2024-12-09 14:48:35.402225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.778 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.778 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.779 [2024-12-09 14:48:35.782644] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.779 [2024-12-09 14:48:35.782706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.779 [2024-12-09 14:48:35.782717] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.779 [2024-12-09 14:48:35.782727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.779 [2024-12-09 14:48:35.782733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:57.779 [2024-12-09 14:48:35.782741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.779 [2024-12-09 14:48:35.782748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:57.779 [2024-12-09 14:48:35.782756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.779 14:48:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.779 "name": "Existed_Raid", 00:15:57.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.779 "strip_size_kb": 64, 00:15:57.779 "state": "configuring", 00:15:57.779 "raid_level": "raid5f", 00:15:57.779 "superblock": false, 00:15:57.779 "num_base_bdevs": 4, 00:15:57.779 "num_base_bdevs_discovered": 0, 00:15:57.779 "num_base_bdevs_operational": 4, 00:15:57.779 "base_bdevs_list": [ 00:15:57.779 { 00:15:57.779 "name": "BaseBdev1", 00:15:57.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.779 "is_configured": false, 00:15:57.779 "data_offset": 0, 00:15:57.779 "data_size": 0 00:15:57.779 }, 00:15:57.779 { 00:15:57.779 "name": "BaseBdev2", 00:15:57.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.779 "is_configured": false, 00:15:57.779 "data_offset": 0, 00:15:57.779 "data_size": 0 00:15:57.779 }, 00:15:57.779 { 00:15:57.779 "name": "BaseBdev3", 00:15:57.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.779 "is_configured": false, 00:15:57.779 "data_offset": 0, 00:15:57.779 "data_size": 0 00:15:57.779 }, 00:15:57.779 { 00:15:57.779 "name": "BaseBdev4", 00:15:57.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.779 "is_configured": false, 00:15:57.779 "data_offset": 0, 00:15:57.779 "data_size": 0 00:15:57.779 } 00:15:57.779 ] 00:15:57.779 }' 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.779 14:48:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.348 14:48:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.348 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.348 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.348 [2024-12-09 14:48:36.245778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.348 [2024-12-09 14:48:36.245868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.349 [2024-12-09 14:48:36.257747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.349 [2024-12-09 14:48:36.257825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.349 [2024-12-09 14:48:36.257853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.349 [2024-12-09 14:48:36.257875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.349 [2024-12-09 14:48:36.257893] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:58.349 [2024-12-09 14:48:36.257913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.349 [2024-12-09 14:48:36.257930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:58.349 [2024-12-09 14:48:36.257950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.349 [2024-12-09 14:48:36.304046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.349 BaseBdev1 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.349 
14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.349 [ 00:15:58.349 { 00:15:58.349 "name": "BaseBdev1", 00:15:58.349 "aliases": [ 00:15:58.349 "9ccee8e7-bf1a-47b8-8126-df38ea48448a" 00:15:58.349 ], 00:15:58.349 "product_name": "Malloc disk", 00:15:58.349 "block_size": 512, 00:15:58.349 "num_blocks": 65536, 00:15:58.349 "uuid": "9ccee8e7-bf1a-47b8-8126-df38ea48448a", 00:15:58.349 "assigned_rate_limits": { 00:15:58.349 "rw_ios_per_sec": 0, 00:15:58.349 "rw_mbytes_per_sec": 0, 00:15:58.349 "r_mbytes_per_sec": 0, 00:15:58.349 "w_mbytes_per_sec": 0 00:15:58.349 }, 00:15:58.349 "claimed": true, 00:15:58.349 "claim_type": "exclusive_write", 00:15:58.349 "zoned": false, 00:15:58.349 "supported_io_types": { 00:15:58.349 "read": true, 00:15:58.349 "write": true, 00:15:58.349 "unmap": true, 00:15:58.349 "flush": true, 00:15:58.349 "reset": true, 00:15:58.349 "nvme_admin": false, 00:15:58.349 "nvme_io": false, 00:15:58.349 "nvme_io_md": false, 00:15:58.349 "write_zeroes": true, 00:15:58.349 "zcopy": true, 00:15:58.349 "get_zone_info": false, 00:15:58.349 "zone_management": false, 00:15:58.349 "zone_append": false, 00:15:58.349 "compare": false, 00:15:58.349 "compare_and_write": false, 00:15:58.349 "abort": true, 00:15:58.349 "seek_hole": false, 00:15:58.349 "seek_data": false, 00:15:58.349 "copy": true, 00:15:58.349 "nvme_iov_md": false 00:15:58.349 }, 00:15:58.349 "memory_domains": [ 00:15:58.349 { 00:15:58.349 "dma_device_id": "system", 00:15:58.349 "dma_device_type": 1 00:15:58.349 }, 00:15:58.349 { 00:15:58.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.349 "dma_device_type": 2 00:15:58.349 } 00:15:58.349 ], 00:15:58.349 "driver_specific": {} 00:15:58.349 } 
00:15:58.349 ] 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.349 "name": "Existed_Raid", 00:15:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.349 "strip_size_kb": 64, 00:15:58.349 "state": "configuring", 00:15:58.349 "raid_level": "raid5f", 00:15:58.349 "superblock": false, 00:15:58.349 "num_base_bdevs": 4, 00:15:58.349 "num_base_bdevs_discovered": 1, 00:15:58.349 "num_base_bdevs_operational": 4, 00:15:58.349 "base_bdevs_list": [ 00:15:58.349 { 00:15:58.349 "name": "BaseBdev1", 00:15:58.349 "uuid": "9ccee8e7-bf1a-47b8-8126-df38ea48448a", 00:15:58.349 "is_configured": true, 00:15:58.349 "data_offset": 0, 00:15:58.349 "data_size": 65536 00:15:58.349 }, 00:15:58.349 { 00:15:58.349 "name": "BaseBdev2", 00:15:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.349 "is_configured": false, 00:15:58.349 "data_offset": 0, 00:15:58.349 "data_size": 0 00:15:58.349 }, 00:15:58.349 { 00:15:58.349 "name": "BaseBdev3", 00:15:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.349 "is_configured": false, 00:15:58.349 "data_offset": 0, 00:15:58.349 "data_size": 0 00:15:58.349 }, 00:15:58.349 { 00:15:58.349 "name": "BaseBdev4", 00:15:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.349 "is_configured": false, 00:15:58.349 "data_offset": 0, 00:15:58.349 "data_size": 0 00:15:58.349 } 00:15:58.349 ] 00:15:58.349 }' 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.349 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.921 
[2024-12-09 14:48:36.819309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.921 [2024-12-09 14:48:36.819372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.921 [2024-12-09 14:48:36.831329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.921 [2024-12-09 14:48:36.833342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.921 [2024-12-09 14:48:36.833389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.921 [2024-12-09 14:48:36.833400] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:58.921 [2024-12-09 14:48:36.833411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.921 [2024-12-09 14:48:36.833417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:58.921 [2024-12-09 14:48:36.833425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.921 "name": "Existed_Raid", 00:15:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:58.921 "strip_size_kb": 64, 00:15:58.921 "state": "configuring", 00:15:58.921 "raid_level": "raid5f", 00:15:58.921 "superblock": false, 00:15:58.921 "num_base_bdevs": 4, 00:15:58.921 "num_base_bdevs_discovered": 1, 00:15:58.921 "num_base_bdevs_operational": 4, 00:15:58.921 "base_bdevs_list": [ 00:15:58.921 { 00:15:58.921 "name": "BaseBdev1", 00:15:58.921 "uuid": "9ccee8e7-bf1a-47b8-8126-df38ea48448a", 00:15:58.921 "is_configured": true, 00:15:58.921 "data_offset": 0, 00:15:58.921 "data_size": 65536 00:15:58.921 }, 00:15:58.921 { 00:15:58.921 "name": "BaseBdev2", 00:15:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.921 "is_configured": false, 00:15:58.921 "data_offset": 0, 00:15:58.921 "data_size": 0 00:15:58.921 }, 00:15:58.921 { 00:15:58.921 "name": "BaseBdev3", 00:15:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.921 "is_configured": false, 00:15:58.921 "data_offset": 0, 00:15:58.921 "data_size": 0 00:15:58.921 }, 00:15:58.921 { 00:15:58.921 "name": "BaseBdev4", 00:15:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.921 "is_configured": false, 00:15:58.921 "data_offset": 0, 00:15:58.921 "data_size": 0 00:15:58.921 } 00:15:58.921 ] 00:15:58.921 }' 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.921 14:48:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.184 [2024-12-09 14:48:37.252819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.184 BaseBdev2 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.184 [ 00:15:59.184 { 00:15:59.184 "name": "BaseBdev2", 00:15:59.184 "aliases": [ 00:15:59.184 "efe3ef2c-8337-4a57-865d-ae64ab34e2af" 00:15:59.184 ], 00:15:59.184 "product_name": "Malloc disk", 00:15:59.184 "block_size": 512, 00:15:59.184 "num_blocks": 65536, 00:15:59.184 "uuid": "efe3ef2c-8337-4a57-865d-ae64ab34e2af", 00:15:59.184 "assigned_rate_limits": { 00:15:59.184 "rw_ios_per_sec": 0, 00:15:59.184 "rw_mbytes_per_sec": 0, 00:15:59.184 
"r_mbytes_per_sec": 0, 00:15:59.184 "w_mbytes_per_sec": 0 00:15:59.184 }, 00:15:59.184 "claimed": true, 00:15:59.184 "claim_type": "exclusive_write", 00:15:59.184 "zoned": false, 00:15:59.184 "supported_io_types": { 00:15:59.184 "read": true, 00:15:59.184 "write": true, 00:15:59.184 "unmap": true, 00:15:59.184 "flush": true, 00:15:59.184 "reset": true, 00:15:59.184 "nvme_admin": false, 00:15:59.184 "nvme_io": false, 00:15:59.184 "nvme_io_md": false, 00:15:59.184 "write_zeroes": true, 00:15:59.184 "zcopy": true, 00:15:59.184 "get_zone_info": false, 00:15:59.184 "zone_management": false, 00:15:59.184 "zone_append": false, 00:15:59.184 "compare": false, 00:15:59.184 "compare_and_write": false, 00:15:59.184 "abort": true, 00:15:59.184 "seek_hole": false, 00:15:59.184 "seek_data": false, 00:15:59.184 "copy": true, 00:15:59.184 "nvme_iov_md": false 00:15:59.184 }, 00:15:59.184 "memory_domains": [ 00:15:59.184 { 00:15:59.184 "dma_device_id": "system", 00:15:59.184 "dma_device_type": 1 00:15:59.184 }, 00:15:59.184 { 00:15:59.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.184 "dma_device_type": 2 00:15:59.184 } 00:15:59.184 ], 00:15:59.184 "driver_specific": {} 00:15:59.184 } 00:15:59.184 ] 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.184 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.444 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.444 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.444 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.444 "name": "Existed_Raid", 00:15:59.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.444 "strip_size_kb": 64, 00:15:59.444 "state": "configuring", 00:15:59.444 "raid_level": "raid5f", 00:15:59.444 "superblock": false, 00:15:59.444 "num_base_bdevs": 4, 00:15:59.444 "num_base_bdevs_discovered": 2, 00:15:59.444 "num_base_bdevs_operational": 4, 00:15:59.444 "base_bdevs_list": [ 00:15:59.444 { 00:15:59.444 "name": "BaseBdev1", 00:15:59.444 "uuid": 
"9ccee8e7-bf1a-47b8-8126-df38ea48448a", 00:15:59.444 "is_configured": true, 00:15:59.444 "data_offset": 0, 00:15:59.444 "data_size": 65536 00:15:59.444 }, 00:15:59.444 { 00:15:59.444 "name": "BaseBdev2", 00:15:59.444 "uuid": "efe3ef2c-8337-4a57-865d-ae64ab34e2af", 00:15:59.444 "is_configured": true, 00:15:59.444 "data_offset": 0, 00:15:59.444 "data_size": 65536 00:15:59.444 }, 00:15:59.444 { 00:15:59.444 "name": "BaseBdev3", 00:15:59.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.444 "is_configured": false, 00:15:59.444 "data_offset": 0, 00:15:59.444 "data_size": 0 00:15:59.444 }, 00:15:59.444 { 00:15:59.444 "name": "BaseBdev4", 00:15:59.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.444 "is_configured": false, 00:15:59.444 "data_offset": 0, 00:15:59.444 "data_size": 0 00:15:59.444 } 00:15:59.444 ] 00:15:59.444 }' 00:15:59.444 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.444 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.704 [2024-12-09 14:48:37.813663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.704 BaseBdev3 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.704 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.963 [ 00:15:59.963 { 00:15:59.963 "name": "BaseBdev3", 00:15:59.963 "aliases": [ 00:15:59.963 "2a420e87-ebbe-4823-9eba-5e8baaf7273e" 00:15:59.963 ], 00:15:59.963 "product_name": "Malloc disk", 00:15:59.963 "block_size": 512, 00:15:59.963 "num_blocks": 65536, 00:15:59.963 "uuid": "2a420e87-ebbe-4823-9eba-5e8baaf7273e", 00:15:59.963 "assigned_rate_limits": { 00:15:59.963 "rw_ios_per_sec": 0, 00:15:59.963 "rw_mbytes_per_sec": 0, 00:15:59.963 "r_mbytes_per_sec": 0, 00:15:59.963 "w_mbytes_per_sec": 0 00:15:59.963 }, 00:15:59.963 "claimed": true, 00:15:59.963 "claim_type": "exclusive_write", 00:15:59.963 "zoned": false, 00:15:59.963 "supported_io_types": { 00:15:59.963 "read": true, 00:15:59.963 "write": true, 00:15:59.963 "unmap": true, 00:15:59.963 "flush": true, 00:15:59.963 "reset": true, 00:15:59.963 "nvme_admin": false, 
00:15:59.963 "nvme_io": false, 00:15:59.963 "nvme_io_md": false, 00:15:59.963 "write_zeroes": true, 00:15:59.963 "zcopy": true, 00:15:59.963 "get_zone_info": false, 00:15:59.963 "zone_management": false, 00:15:59.963 "zone_append": false, 00:15:59.963 "compare": false, 00:15:59.963 "compare_and_write": false, 00:15:59.963 "abort": true, 00:15:59.963 "seek_hole": false, 00:15:59.963 "seek_data": false, 00:15:59.963 "copy": true, 00:15:59.963 "nvme_iov_md": false 00:15:59.963 }, 00:15:59.963 "memory_domains": [ 00:15:59.963 { 00:15:59.963 "dma_device_id": "system", 00:15:59.963 "dma_device_type": 1 00:15:59.963 }, 00:15:59.963 { 00:15:59.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.963 "dma_device_type": 2 00:15:59.963 } 00:15:59.963 ], 00:15:59.963 "driver_specific": {} 00:15:59.963 } 00:15:59.963 ] 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.963 "name": "Existed_Raid", 00:15:59.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.963 "strip_size_kb": 64, 00:15:59.963 "state": "configuring", 00:15:59.963 "raid_level": "raid5f", 00:15:59.963 "superblock": false, 00:15:59.963 "num_base_bdevs": 4, 00:15:59.963 "num_base_bdevs_discovered": 3, 00:15:59.963 "num_base_bdevs_operational": 4, 00:15:59.963 "base_bdevs_list": [ 00:15:59.963 { 00:15:59.963 "name": "BaseBdev1", 00:15:59.963 "uuid": "9ccee8e7-bf1a-47b8-8126-df38ea48448a", 00:15:59.963 "is_configured": true, 00:15:59.963 "data_offset": 0, 00:15:59.963 "data_size": 65536 00:15:59.963 }, 00:15:59.963 { 00:15:59.963 "name": "BaseBdev2", 00:15:59.963 "uuid": "efe3ef2c-8337-4a57-865d-ae64ab34e2af", 00:15:59.963 "is_configured": true, 00:15:59.963 "data_offset": 0, 00:15:59.963 "data_size": 65536 00:15:59.963 }, 00:15:59.963 { 
00:15:59.963 "name": "BaseBdev3", 00:15:59.963 "uuid": "2a420e87-ebbe-4823-9eba-5e8baaf7273e", 00:15:59.963 "is_configured": true, 00:15:59.963 "data_offset": 0, 00:15:59.963 "data_size": 65536 00:15:59.963 }, 00:15:59.963 { 00:15:59.963 "name": "BaseBdev4", 00:15:59.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.963 "is_configured": false, 00:15:59.963 "data_offset": 0, 00:15:59.963 "data_size": 0 00:15:59.963 } 00:15:59.963 ] 00:15:59.963 }' 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.963 14:48:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.222 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:00.222 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.222 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.222 [2024-12-09 14:48:38.336086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.222 [2024-12-09 14:48:38.336154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:00.222 [2024-12-09 14:48:38.336164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:00.222 [2024-12-09 14:48:38.336422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:00.222 [2024-12-09 14:48:38.343371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:00.222 [2024-12-09 14:48:38.343442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:00.223 [2024-12-09 14:48:38.343766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.482 BaseBdev4 00:16:00.482 14:48:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.482 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.482 [ 00:16:00.482 { 00:16:00.482 "name": "BaseBdev4", 00:16:00.482 "aliases": [ 00:16:00.482 "e51470b4-9a4f-4006-8cbe-30f8180c6ead" 00:16:00.482 ], 00:16:00.482 "product_name": "Malloc disk", 00:16:00.482 "block_size": 512, 00:16:00.482 "num_blocks": 65536, 00:16:00.482 "uuid": "e51470b4-9a4f-4006-8cbe-30f8180c6ead", 00:16:00.482 "assigned_rate_limits": { 00:16:00.482 "rw_ios_per_sec": 0, 00:16:00.482 
"rw_mbytes_per_sec": 0, 00:16:00.482 "r_mbytes_per_sec": 0, 00:16:00.482 "w_mbytes_per_sec": 0 00:16:00.482 }, 00:16:00.482 "claimed": true, 00:16:00.482 "claim_type": "exclusive_write", 00:16:00.482 "zoned": false, 00:16:00.482 "supported_io_types": { 00:16:00.482 "read": true, 00:16:00.482 "write": true, 00:16:00.482 "unmap": true, 00:16:00.482 "flush": true, 00:16:00.482 "reset": true, 00:16:00.482 "nvme_admin": false, 00:16:00.482 "nvme_io": false, 00:16:00.482 "nvme_io_md": false, 00:16:00.482 "write_zeroes": true, 00:16:00.482 "zcopy": true, 00:16:00.482 "get_zone_info": false, 00:16:00.482 "zone_management": false, 00:16:00.482 "zone_append": false, 00:16:00.482 "compare": false, 00:16:00.482 "compare_and_write": false, 00:16:00.482 "abort": true, 00:16:00.482 "seek_hole": false, 00:16:00.482 "seek_data": false, 00:16:00.482 "copy": true, 00:16:00.482 "nvme_iov_md": false 00:16:00.482 }, 00:16:00.482 "memory_domains": [ 00:16:00.482 { 00:16:00.482 "dma_device_id": "system", 00:16:00.482 "dma_device_type": 1 00:16:00.483 }, 00:16:00.483 { 00:16:00.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.483 "dma_device_type": 2 00:16:00.483 } 00:16:00.483 ], 00:16:00.483 "driver_specific": {} 00:16:00.483 } 00:16:00.483 ] 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.483 14:48:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.483 "name": "Existed_Raid", 00:16:00.483 "uuid": "97d6d505-6969-4897-b094-9a3f300603a9", 00:16:00.483 "strip_size_kb": 64, 00:16:00.483 "state": "online", 00:16:00.483 "raid_level": "raid5f", 00:16:00.483 "superblock": false, 00:16:00.483 "num_base_bdevs": 4, 00:16:00.483 "num_base_bdevs_discovered": 4, 00:16:00.483 "num_base_bdevs_operational": 4, 00:16:00.483 "base_bdevs_list": [ 00:16:00.483 { 00:16:00.483 "name": 
"BaseBdev1", 00:16:00.483 "uuid": "9ccee8e7-bf1a-47b8-8126-df38ea48448a", 00:16:00.483 "is_configured": true, 00:16:00.483 "data_offset": 0, 00:16:00.483 "data_size": 65536 00:16:00.483 }, 00:16:00.483 { 00:16:00.483 "name": "BaseBdev2", 00:16:00.483 "uuid": "efe3ef2c-8337-4a57-865d-ae64ab34e2af", 00:16:00.483 "is_configured": true, 00:16:00.483 "data_offset": 0, 00:16:00.483 "data_size": 65536 00:16:00.483 }, 00:16:00.483 { 00:16:00.483 "name": "BaseBdev3", 00:16:00.483 "uuid": "2a420e87-ebbe-4823-9eba-5e8baaf7273e", 00:16:00.483 "is_configured": true, 00:16:00.483 "data_offset": 0, 00:16:00.483 "data_size": 65536 00:16:00.483 }, 00:16:00.483 { 00:16:00.483 "name": "BaseBdev4", 00:16:00.483 "uuid": "e51470b4-9a4f-4006-8cbe-30f8180c6ead", 00:16:00.483 "is_configured": true, 00:16:00.483 "data_offset": 0, 00:16:00.483 "data_size": 65536 00:16:00.483 } 00:16:00.483 ] 00:16:00.483 }' 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.483 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.051 [2024-12-09 14:48:38.883472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:01.051 "name": "Existed_Raid", 00:16:01.051 "aliases": [ 00:16:01.051 "97d6d505-6969-4897-b094-9a3f300603a9" 00:16:01.051 ], 00:16:01.051 "product_name": "Raid Volume", 00:16:01.051 "block_size": 512, 00:16:01.051 "num_blocks": 196608, 00:16:01.051 "uuid": "97d6d505-6969-4897-b094-9a3f300603a9", 00:16:01.051 "assigned_rate_limits": { 00:16:01.051 "rw_ios_per_sec": 0, 00:16:01.051 "rw_mbytes_per_sec": 0, 00:16:01.051 "r_mbytes_per_sec": 0, 00:16:01.051 "w_mbytes_per_sec": 0 00:16:01.051 }, 00:16:01.051 "claimed": false, 00:16:01.051 "zoned": false, 00:16:01.051 "supported_io_types": { 00:16:01.051 "read": true, 00:16:01.051 "write": true, 00:16:01.051 "unmap": false, 00:16:01.051 "flush": false, 00:16:01.051 "reset": true, 00:16:01.051 "nvme_admin": false, 00:16:01.051 "nvme_io": false, 00:16:01.051 "nvme_io_md": false, 00:16:01.051 "write_zeroes": true, 00:16:01.051 "zcopy": false, 00:16:01.051 "get_zone_info": false, 00:16:01.051 "zone_management": false, 00:16:01.051 "zone_append": false, 00:16:01.051 "compare": false, 00:16:01.051 "compare_and_write": false, 00:16:01.051 "abort": false, 00:16:01.051 "seek_hole": false, 00:16:01.051 "seek_data": false, 00:16:01.051 "copy": false, 00:16:01.051 "nvme_iov_md": false 00:16:01.051 }, 00:16:01.051 "driver_specific": { 00:16:01.051 "raid": { 00:16:01.051 "uuid": "97d6d505-6969-4897-b094-9a3f300603a9", 00:16:01.051 "strip_size_kb": 64, 
00:16:01.051 "state": "online", 00:16:01.051 "raid_level": "raid5f", 00:16:01.051 "superblock": false, 00:16:01.051 "num_base_bdevs": 4, 00:16:01.051 "num_base_bdevs_discovered": 4, 00:16:01.051 "num_base_bdevs_operational": 4, 00:16:01.051 "base_bdevs_list": [ 00:16:01.051 { 00:16:01.051 "name": "BaseBdev1", 00:16:01.051 "uuid": "9ccee8e7-bf1a-47b8-8126-df38ea48448a", 00:16:01.051 "is_configured": true, 00:16:01.051 "data_offset": 0, 00:16:01.051 "data_size": 65536 00:16:01.051 }, 00:16:01.051 { 00:16:01.051 "name": "BaseBdev2", 00:16:01.051 "uuid": "efe3ef2c-8337-4a57-865d-ae64ab34e2af", 00:16:01.051 "is_configured": true, 00:16:01.051 "data_offset": 0, 00:16:01.051 "data_size": 65536 00:16:01.051 }, 00:16:01.051 { 00:16:01.051 "name": "BaseBdev3", 00:16:01.051 "uuid": "2a420e87-ebbe-4823-9eba-5e8baaf7273e", 00:16:01.051 "is_configured": true, 00:16:01.051 "data_offset": 0, 00:16:01.051 "data_size": 65536 00:16:01.051 }, 00:16:01.051 { 00:16:01.051 "name": "BaseBdev4", 00:16:01.051 "uuid": "e51470b4-9a4f-4006-8cbe-30f8180c6ead", 00:16:01.051 "is_configured": true, 00:16:01.051 "data_offset": 0, 00:16:01.051 "data_size": 65536 00:16:01.051 } 00:16:01.051 ] 00:16:01.051 } 00:16:01.051 } 00:16:01.051 }' 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:01.051 BaseBdev2 00:16:01.051 BaseBdev3 00:16:01.051 BaseBdev4' 00:16:01.051 14:48:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.051 14:48:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.051 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:01.311 [2024-12-09 14:48:39.230693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.311 14:48:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.311 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.311 "name": "Existed_Raid", 00:16:01.311 "uuid": "97d6d505-6969-4897-b094-9a3f300603a9", 00:16:01.311 "strip_size_kb": 64, 00:16:01.311 "state": "online", 00:16:01.311 "raid_level": "raid5f", 00:16:01.311 "superblock": false, 00:16:01.311 "num_base_bdevs": 4, 00:16:01.311 "num_base_bdevs_discovered": 3, 00:16:01.311 "num_base_bdevs_operational": 3, 00:16:01.311 "base_bdevs_list": [ 00:16:01.311 { 00:16:01.311 "name": null, 00:16:01.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.311 "is_configured": false, 00:16:01.311 "data_offset": 0, 00:16:01.311 "data_size": 65536 00:16:01.311 }, 00:16:01.311 { 00:16:01.311 "name": "BaseBdev2", 00:16:01.311 "uuid": "efe3ef2c-8337-4a57-865d-ae64ab34e2af", 00:16:01.311 "is_configured": true, 00:16:01.311 "data_offset": 0, 00:16:01.311 "data_size": 65536 00:16:01.311 }, 00:16:01.311 { 00:16:01.311 "name": "BaseBdev3", 00:16:01.311 "uuid": "2a420e87-ebbe-4823-9eba-5e8baaf7273e", 00:16:01.311 "is_configured": true, 00:16:01.312 "data_offset": 0, 00:16:01.312 "data_size": 65536 00:16:01.312 }, 00:16:01.312 { 00:16:01.312 "name": "BaseBdev4", 00:16:01.312 "uuid": "e51470b4-9a4f-4006-8cbe-30f8180c6ead", 00:16:01.312 "is_configured": true, 00:16:01.312 "data_offset": 0, 00:16:01.312 "data_size": 65536 00:16:01.312 } 00:16:01.312 ] 00:16:01.312 }' 00:16:01.312 
14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.312 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 [2024-12-09 14:48:39.791877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.881 [2024-12-09 14:48:39.791990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.881 [2024-12-09 14:48:39.887659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.881 14:48:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 [2024-12-09 14:48:39.963580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 [2024-12-09 14:48:40.116291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:02.141 [2024-12-09 14:48:40.116343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:16:02.141 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 BaseBdev2 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 [ 00:16:02.401 { 00:16:02.401 "name": "BaseBdev2", 00:16:02.401 "aliases": [ 00:16:02.401 "fd83f017-7104-4183-80b9-682a51f83c83" 00:16:02.401 ], 00:16:02.401 "product_name": "Malloc disk", 00:16:02.401 "block_size": 512, 00:16:02.401 "num_blocks": 65536, 00:16:02.401 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:02.401 "assigned_rate_limits": { 00:16:02.401 "rw_ios_per_sec": 0, 00:16:02.401 "rw_mbytes_per_sec": 0, 00:16:02.401 "r_mbytes_per_sec": 0, 00:16:02.401 "w_mbytes_per_sec": 0 00:16:02.401 }, 00:16:02.401 "claimed": false, 00:16:02.401 "zoned": false, 00:16:02.401 "supported_io_types": { 00:16:02.401 "read": true, 00:16:02.401 "write": true, 00:16:02.401 "unmap": true, 00:16:02.401 "flush": true, 00:16:02.401 "reset": true, 00:16:02.401 "nvme_admin": false, 00:16:02.401 "nvme_io": false, 00:16:02.401 "nvme_io_md": false, 00:16:02.401 "write_zeroes": true, 00:16:02.401 "zcopy": true, 00:16:02.401 "get_zone_info": false, 00:16:02.401 "zone_management": false, 00:16:02.401 "zone_append": false, 00:16:02.401 "compare": false, 00:16:02.401 "compare_and_write": false, 00:16:02.401 "abort": true, 00:16:02.401 "seek_hole": false, 00:16:02.401 "seek_data": false, 00:16:02.401 "copy": true, 00:16:02.401 "nvme_iov_md": false 00:16:02.401 }, 00:16:02.401 "memory_domains": [ 00:16:02.401 { 00:16:02.401 "dma_device_id": "system", 00:16:02.401 
"dma_device_type": 1 00:16:02.401 }, 00:16:02.401 { 00:16:02.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.401 "dma_device_type": 2 00:16:02.401 } 00:16:02.401 ], 00:16:02.401 "driver_specific": {} 00:16:02.401 } 00:16:02.401 ] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 BaseBdev3 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.401 14:48:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.401 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 [ 00:16:02.401 { 00:16:02.401 "name": "BaseBdev3", 00:16:02.401 "aliases": [ 00:16:02.401 "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4" 00:16:02.401 ], 00:16:02.401 "product_name": "Malloc disk", 00:16:02.401 "block_size": 512, 00:16:02.401 "num_blocks": 65536, 00:16:02.401 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:02.401 "assigned_rate_limits": { 00:16:02.401 "rw_ios_per_sec": 0, 00:16:02.401 "rw_mbytes_per_sec": 0, 00:16:02.401 "r_mbytes_per_sec": 0, 00:16:02.401 "w_mbytes_per_sec": 0 00:16:02.401 }, 00:16:02.401 "claimed": false, 00:16:02.401 "zoned": false, 00:16:02.401 "supported_io_types": { 00:16:02.401 "read": true, 00:16:02.401 "write": true, 00:16:02.401 "unmap": true, 00:16:02.401 "flush": true, 00:16:02.401 "reset": true, 00:16:02.401 "nvme_admin": false, 00:16:02.401 "nvme_io": false, 00:16:02.401 "nvme_io_md": false, 00:16:02.401 "write_zeroes": true, 00:16:02.401 "zcopy": true, 00:16:02.402 "get_zone_info": false, 00:16:02.402 "zone_management": false, 00:16:02.402 "zone_append": false, 00:16:02.402 "compare": false, 00:16:02.402 "compare_and_write": false, 00:16:02.402 "abort": true, 00:16:02.402 "seek_hole": false, 00:16:02.402 "seek_data": false, 00:16:02.402 "copy": true, 00:16:02.402 "nvme_iov_md": false 00:16:02.402 }, 00:16:02.402 "memory_domains": [ 00:16:02.402 { 00:16:02.402 
"dma_device_id": "system", 00:16:02.402 "dma_device_type": 1 00:16:02.402 }, 00:16:02.402 { 00:16:02.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.402 "dma_device_type": 2 00:16:02.402 } 00:16:02.402 ], 00:16:02.402 "driver_specific": {} 00:16:02.402 } 00:16:02.402 ] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.402 BaseBdev4 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.402 [ 00:16:02.402 { 00:16:02.402 "name": "BaseBdev4", 00:16:02.402 "aliases": [ 00:16:02.402 "8bdc5947-8ad1-45cf-9865-7345b1fd97cd" 00:16:02.402 ], 00:16:02.402 "product_name": "Malloc disk", 00:16:02.402 "block_size": 512, 00:16:02.402 "num_blocks": 65536, 00:16:02.402 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:02.402 "assigned_rate_limits": { 00:16:02.402 "rw_ios_per_sec": 0, 00:16:02.402 "rw_mbytes_per_sec": 0, 00:16:02.402 "r_mbytes_per_sec": 0, 00:16:02.402 "w_mbytes_per_sec": 0 00:16:02.402 }, 00:16:02.402 "claimed": false, 00:16:02.402 "zoned": false, 00:16:02.402 "supported_io_types": { 00:16:02.402 "read": true, 00:16:02.402 "write": true, 00:16:02.402 "unmap": true, 00:16:02.402 "flush": true, 00:16:02.402 "reset": true, 00:16:02.402 "nvme_admin": false, 00:16:02.402 "nvme_io": false, 00:16:02.402 "nvme_io_md": false, 00:16:02.402 "write_zeroes": true, 00:16:02.402 "zcopy": true, 00:16:02.402 "get_zone_info": false, 00:16:02.402 "zone_management": false, 00:16:02.402 "zone_append": false, 00:16:02.402 "compare": false, 00:16:02.402 "compare_and_write": false, 00:16:02.402 "abort": true, 00:16:02.402 "seek_hole": false, 00:16:02.402 "seek_data": false, 00:16:02.402 "copy": true, 00:16:02.402 "nvme_iov_md": false 00:16:02.402 }, 00:16:02.402 "memory_domains": [ 
00:16:02.402 { 00:16:02.402 "dma_device_id": "system", 00:16:02.402 "dma_device_type": 1 00:16:02.402 }, 00:16:02.402 { 00:16:02.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.402 "dma_device_type": 2 00:16:02.402 } 00:16:02.402 ], 00:16:02.402 "driver_specific": {} 00:16:02.402 } 00:16:02.402 ] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.402 [2024-12-09 14:48:40.514237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.402 [2024-12-09 14:48:40.514321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.402 [2024-12-09 14:48:40.514378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.402 [2024-12-09 14:48:40.516236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.402 [2024-12-09 14:48:40.516333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.402 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.662 "name": "Existed_Raid", 00:16:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.662 "strip_size_kb": 64, 00:16:02.662 "state": "configuring", 00:16:02.662 "raid_level": "raid5f", 00:16:02.662 
"superblock": false, 00:16:02.662 "num_base_bdevs": 4, 00:16:02.662 "num_base_bdevs_discovered": 3, 00:16:02.662 "num_base_bdevs_operational": 4, 00:16:02.662 "base_bdevs_list": [ 00:16:02.662 { 00:16:02.662 "name": "BaseBdev1", 00:16:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.662 "is_configured": false, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 0 00:16:02.662 }, 00:16:02.662 { 00:16:02.662 "name": "BaseBdev2", 00:16:02.662 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:02.662 "is_configured": true, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 65536 00:16:02.662 }, 00:16:02.662 { 00:16:02.662 "name": "BaseBdev3", 00:16:02.662 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:02.662 "is_configured": true, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 65536 00:16:02.662 }, 00:16:02.662 { 00:16:02.662 "name": "BaseBdev4", 00:16:02.662 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:02.662 "is_configured": true, 00:16:02.662 "data_offset": 0, 00:16:02.662 "data_size": 65536 00:16:02.662 } 00:16:02.662 ] 00:16:02.662 }' 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.662 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.922 [2024-12-09 14:48:40.949511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.922 14:48:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.922 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.922 "name": "Existed_Raid", 00:16:02.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.922 "strip_size_kb": 64, 00:16:02.922 "state": "configuring", 00:16:02.922 "raid_level": "raid5f", 00:16:02.922 "superblock": false, 
00:16:02.922 "num_base_bdevs": 4, 00:16:02.922 "num_base_bdevs_discovered": 2, 00:16:02.922 "num_base_bdevs_operational": 4, 00:16:02.922 "base_bdevs_list": [ 00:16:02.922 { 00:16:02.922 "name": "BaseBdev1", 00:16:02.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.922 "is_configured": false, 00:16:02.922 "data_offset": 0, 00:16:02.922 "data_size": 0 00:16:02.922 }, 00:16:02.922 { 00:16:02.922 "name": null, 00:16:02.922 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:02.922 "is_configured": false, 00:16:02.922 "data_offset": 0, 00:16:02.922 "data_size": 65536 00:16:02.922 }, 00:16:02.922 { 00:16:02.922 "name": "BaseBdev3", 00:16:02.922 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:02.922 "is_configured": true, 00:16:02.922 "data_offset": 0, 00:16:02.922 "data_size": 65536 00:16:02.922 }, 00:16:02.922 { 00:16:02.922 "name": "BaseBdev4", 00:16:02.922 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:02.922 "is_configured": true, 00:16:02.922 "data_offset": 0, 00:16:02.922 "data_size": 65536 00:16:02.922 } 00:16:02.922 ] 00:16:02.922 }' 00:16:02.922 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.922 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:03.491 
14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.491 [2024-12-09 14:48:41.489057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.491 BaseBdev1 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.491 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.491 
14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.491 [ 00:16:03.491 { 00:16:03.491 "name": "BaseBdev1", 00:16:03.491 "aliases": [ 00:16:03.491 "cc31b287-d6ce-4315-bf6f-8d0ef428b933" 00:16:03.491 ], 00:16:03.491 "product_name": "Malloc disk", 00:16:03.491 "block_size": 512, 00:16:03.491 "num_blocks": 65536, 00:16:03.491 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:03.491 "assigned_rate_limits": { 00:16:03.491 "rw_ios_per_sec": 0, 00:16:03.491 "rw_mbytes_per_sec": 0, 00:16:03.491 "r_mbytes_per_sec": 0, 00:16:03.491 "w_mbytes_per_sec": 0 00:16:03.491 }, 00:16:03.491 "claimed": true, 00:16:03.491 "claim_type": "exclusive_write", 00:16:03.491 "zoned": false, 00:16:03.491 "supported_io_types": { 00:16:03.491 "read": true, 00:16:03.491 "write": true, 00:16:03.491 "unmap": true, 00:16:03.491 "flush": true, 00:16:03.491 "reset": true, 00:16:03.491 "nvme_admin": false, 00:16:03.491 "nvme_io": false, 00:16:03.491 "nvme_io_md": false, 00:16:03.491 "write_zeroes": true, 00:16:03.491 "zcopy": true, 00:16:03.491 "get_zone_info": false, 00:16:03.491 "zone_management": false, 00:16:03.491 "zone_append": false, 00:16:03.491 "compare": false, 00:16:03.491 "compare_and_write": false, 00:16:03.491 "abort": true, 00:16:03.491 "seek_hole": false, 00:16:03.491 "seek_data": false, 00:16:03.491 "copy": true, 00:16:03.492 "nvme_iov_md": false 00:16:03.492 }, 00:16:03.492 "memory_domains": [ 00:16:03.492 { 00:16:03.492 "dma_device_id": "system", 00:16:03.492 "dma_device_type": 1 00:16:03.492 }, 00:16:03.492 { 00:16:03.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.492 "dma_device_type": 2 00:16:03.492 } 00:16:03.492 ], 00:16:03.492 "driver_specific": {} 00:16:03.492 } 00:16:03.492 ] 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:03.492 14:48:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.492 "name": "Existed_Raid", 00:16:03.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.492 "strip_size_kb": 64, 00:16:03.492 "state": 
"configuring", 00:16:03.492 "raid_level": "raid5f", 00:16:03.492 "superblock": false, 00:16:03.492 "num_base_bdevs": 4, 00:16:03.492 "num_base_bdevs_discovered": 3, 00:16:03.492 "num_base_bdevs_operational": 4, 00:16:03.492 "base_bdevs_list": [ 00:16:03.492 { 00:16:03.492 "name": "BaseBdev1", 00:16:03.492 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:03.492 "is_configured": true, 00:16:03.492 "data_offset": 0, 00:16:03.492 "data_size": 65536 00:16:03.492 }, 00:16:03.492 { 00:16:03.492 "name": null, 00:16:03.492 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:03.492 "is_configured": false, 00:16:03.492 "data_offset": 0, 00:16:03.492 "data_size": 65536 00:16:03.492 }, 00:16:03.492 { 00:16:03.492 "name": "BaseBdev3", 00:16:03.492 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:03.492 "is_configured": true, 00:16:03.492 "data_offset": 0, 00:16:03.492 "data_size": 65536 00:16:03.492 }, 00:16:03.492 { 00:16:03.492 "name": "BaseBdev4", 00:16:03.492 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:03.492 "is_configured": true, 00:16:03.492 "data_offset": 0, 00:16:03.492 "data_size": 65536 00:16:03.492 } 00:16:03.492 ] 00:16:03.492 }' 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.492 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.061 14:48:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.061 [2024-12-09 14:48:41.972328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.061 14:48:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.061 14:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.061 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.061 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.061 "name": "Existed_Raid", 00:16:04.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.061 "strip_size_kb": 64, 00:16:04.061 "state": "configuring", 00:16:04.061 "raid_level": "raid5f", 00:16:04.061 "superblock": false, 00:16:04.061 "num_base_bdevs": 4, 00:16:04.061 "num_base_bdevs_discovered": 2, 00:16:04.061 "num_base_bdevs_operational": 4, 00:16:04.061 "base_bdevs_list": [ 00:16:04.061 { 00:16:04.061 "name": "BaseBdev1", 00:16:04.061 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:04.061 "is_configured": true, 00:16:04.061 "data_offset": 0, 00:16:04.061 "data_size": 65536 00:16:04.061 }, 00:16:04.061 { 00:16:04.061 "name": null, 00:16:04.061 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:04.061 "is_configured": false, 00:16:04.061 "data_offset": 0, 00:16:04.061 "data_size": 65536 00:16:04.061 }, 00:16:04.061 { 00:16:04.061 "name": null, 00:16:04.061 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:04.061 "is_configured": false, 00:16:04.061 "data_offset": 0, 00:16:04.061 "data_size": 65536 00:16:04.061 }, 00:16:04.061 { 00:16:04.061 "name": "BaseBdev4", 00:16:04.061 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:04.061 "is_configured": true, 00:16:04.061 "data_offset": 0, 00:16:04.061 "data_size": 65536 00:16:04.061 } 00:16:04.061 ] 00:16:04.061 }' 00:16:04.061 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.061 14:48:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.321 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.321 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.321 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.321 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:04.321 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.585 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:04.585 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:04.585 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.585 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.585 [2024-12-09 14:48:42.467490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.585 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.586 
14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.586 "name": "Existed_Raid", 00:16:04.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.586 "strip_size_kb": 64, 00:16:04.586 "state": "configuring", 00:16:04.586 "raid_level": "raid5f", 00:16:04.586 "superblock": false, 00:16:04.586 "num_base_bdevs": 4, 00:16:04.586 "num_base_bdevs_discovered": 3, 00:16:04.586 "num_base_bdevs_operational": 4, 00:16:04.586 "base_bdevs_list": [ 00:16:04.586 { 00:16:04.586 "name": "BaseBdev1", 00:16:04.586 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:04.586 "is_configured": true, 00:16:04.586 "data_offset": 0, 00:16:04.586 "data_size": 65536 00:16:04.586 }, 00:16:04.586 { 00:16:04.586 "name": null, 00:16:04.586 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:04.586 "is_configured": 
false, 00:16:04.586 "data_offset": 0, 00:16:04.586 "data_size": 65536 00:16:04.586 }, 00:16:04.586 { 00:16:04.586 "name": "BaseBdev3", 00:16:04.586 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:04.586 "is_configured": true, 00:16:04.586 "data_offset": 0, 00:16:04.586 "data_size": 65536 00:16:04.586 }, 00:16:04.586 { 00:16:04.586 "name": "BaseBdev4", 00:16:04.586 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:04.586 "is_configured": true, 00:16:04.586 "data_offset": 0, 00:16:04.586 "data_size": 65536 00:16:04.586 } 00:16:04.586 ] 00:16:04.586 }' 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.586 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.854 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.854 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.854 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.854 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:04.854 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.855 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:04.855 14:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:04.855 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.855 14:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.855 [2024-12-09 14:48:42.934918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.114 "name": "Existed_Raid", 00:16:05.114 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:05.114 "strip_size_kb": 64, 00:16:05.114 "state": "configuring", 00:16:05.114 "raid_level": "raid5f", 00:16:05.114 "superblock": false, 00:16:05.114 "num_base_bdevs": 4, 00:16:05.114 "num_base_bdevs_discovered": 2, 00:16:05.114 "num_base_bdevs_operational": 4, 00:16:05.114 "base_bdevs_list": [ 00:16:05.114 { 00:16:05.114 "name": null, 00:16:05.114 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:05.114 "is_configured": false, 00:16:05.114 "data_offset": 0, 00:16:05.114 "data_size": 65536 00:16:05.114 }, 00:16:05.114 { 00:16:05.114 "name": null, 00:16:05.114 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:05.114 "is_configured": false, 00:16:05.114 "data_offset": 0, 00:16:05.114 "data_size": 65536 00:16:05.114 }, 00:16:05.114 { 00:16:05.114 "name": "BaseBdev3", 00:16:05.114 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:05.114 "is_configured": true, 00:16:05.114 "data_offset": 0, 00:16:05.114 "data_size": 65536 00:16:05.114 }, 00:16:05.114 { 00:16:05.114 "name": "BaseBdev4", 00:16:05.114 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:05.114 "is_configured": true, 00:16:05.114 "data_offset": 0, 00:16:05.114 "data_size": 65536 00:16:05.114 } 00:16:05.114 ] 00:16:05.114 }' 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.114 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.373 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.373 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.373 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.373 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:05.373 14:48:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.633 [2024-12-09 14:48:43.524060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.633 "name": "Existed_Raid", 00:16:05.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.633 "strip_size_kb": 64, 00:16:05.633 "state": "configuring", 00:16:05.633 "raid_level": "raid5f", 00:16:05.633 "superblock": false, 00:16:05.633 "num_base_bdevs": 4, 00:16:05.633 "num_base_bdevs_discovered": 3, 00:16:05.633 "num_base_bdevs_operational": 4, 00:16:05.633 "base_bdevs_list": [ 00:16:05.633 { 00:16:05.633 "name": null, 00:16:05.633 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:05.633 "is_configured": false, 00:16:05.633 "data_offset": 0, 00:16:05.633 "data_size": 65536 00:16:05.633 }, 00:16:05.633 { 00:16:05.633 "name": "BaseBdev2", 00:16:05.633 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:05.633 "is_configured": true, 00:16:05.633 "data_offset": 0, 00:16:05.633 "data_size": 65536 00:16:05.633 }, 00:16:05.633 { 00:16:05.633 "name": "BaseBdev3", 00:16:05.633 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:05.633 "is_configured": true, 00:16:05.633 "data_offset": 0, 00:16:05.633 "data_size": 65536 00:16:05.633 }, 00:16:05.633 { 00:16:05.633 "name": "BaseBdev4", 00:16:05.633 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:05.633 "is_configured": true, 00:16:05.633 "data_offset": 0, 00:16:05.633 "data_size": 65536 00:16:05.633 } 00:16:05.633 ] 00:16:05.633 }' 00:16:05.633 14:48:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.633 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.893 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:05.893 14:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.893 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.893 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.893 14:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.893 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:05.893 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.893 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:05.893 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.893 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc31b287-d6ce-4315-bf6f-8d0ef428b933 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.154 [2024-12-09 14:48:44.092511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:06.154 [2024-12-09 
14:48:44.092641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:06.154 [2024-12-09 14:48:44.092701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:06.154 [2024-12-09 14:48:44.093010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:06.154 [2024-12-09 14:48:44.100008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:06.154 [2024-12-09 14:48:44.100072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:06.154 [2024-12-09 14:48:44.100372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.154 NewBaseBdev 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.154 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.154 [ 00:16:06.154 { 00:16:06.154 "name": "NewBaseBdev", 00:16:06.154 "aliases": [ 00:16:06.154 "cc31b287-d6ce-4315-bf6f-8d0ef428b933" 00:16:06.155 ], 00:16:06.155 "product_name": "Malloc disk", 00:16:06.155 "block_size": 512, 00:16:06.155 "num_blocks": 65536, 00:16:06.155 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:06.155 "assigned_rate_limits": { 00:16:06.155 "rw_ios_per_sec": 0, 00:16:06.155 "rw_mbytes_per_sec": 0, 00:16:06.155 "r_mbytes_per_sec": 0, 00:16:06.155 "w_mbytes_per_sec": 0 00:16:06.155 }, 00:16:06.155 "claimed": true, 00:16:06.155 "claim_type": "exclusive_write", 00:16:06.155 "zoned": false, 00:16:06.155 "supported_io_types": { 00:16:06.155 "read": true, 00:16:06.155 "write": true, 00:16:06.155 "unmap": true, 00:16:06.155 "flush": true, 00:16:06.155 "reset": true, 00:16:06.155 "nvme_admin": false, 00:16:06.155 "nvme_io": false, 00:16:06.155 "nvme_io_md": false, 00:16:06.155 "write_zeroes": true, 00:16:06.155 "zcopy": true, 00:16:06.155 "get_zone_info": false, 00:16:06.155 "zone_management": false, 00:16:06.155 "zone_append": false, 00:16:06.155 "compare": false, 00:16:06.155 "compare_and_write": false, 00:16:06.155 "abort": true, 00:16:06.155 "seek_hole": false, 00:16:06.155 "seek_data": false, 00:16:06.155 "copy": true, 00:16:06.155 "nvme_iov_md": false 00:16:06.155 }, 00:16:06.155 "memory_domains": [ 00:16:06.155 { 00:16:06.155 "dma_device_id": "system", 00:16:06.155 "dma_device_type": 1 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.155 "dma_device_type": 2 00:16:06.155 } 
00:16:06.155 ], 00:16:06.155 "driver_specific": {} 00:16:06.155 } 00:16:06.155 ] 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.155 "name": "Existed_Raid", 00:16:06.155 "uuid": "5fdf9672-3883-4e9b-98ec-8067848f24ba", 00:16:06.155 "strip_size_kb": 64, 00:16:06.155 "state": "online", 00:16:06.155 "raid_level": "raid5f", 00:16:06.155 "superblock": false, 00:16:06.155 "num_base_bdevs": 4, 00:16:06.155 "num_base_bdevs_discovered": 4, 00:16:06.155 "num_base_bdevs_operational": 4, 00:16:06.155 "base_bdevs_list": [ 00:16:06.155 { 00:16:06.155 "name": "NewBaseBdev", 00:16:06.155 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 0, 00:16:06.155 "data_size": 65536 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "name": "BaseBdev2", 00:16:06.155 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 0, 00:16:06.155 "data_size": 65536 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "name": "BaseBdev3", 00:16:06.155 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 0, 00:16:06.155 "data_size": 65536 00:16:06.155 }, 00:16:06.155 { 00:16:06.155 "name": "BaseBdev4", 00:16:06.155 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:06.155 "is_configured": true, 00:16:06.155 "data_offset": 0, 00:16:06.155 "data_size": 65536 00:16:06.155 } 00:16:06.155 ] 00:16:06.155 }' 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.155 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.725 [2024-12-09 14:48:44.552373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:06.725 "name": "Existed_Raid", 00:16:06.725 "aliases": [ 00:16:06.725 "5fdf9672-3883-4e9b-98ec-8067848f24ba" 00:16:06.725 ], 00:16:06.725 "product_name": "Raid Volume", 00:16:06.725 "block_size": 512, 00:16:06.725 "num_blocks": 196608, 00:16:06.725 "uuid": "5fdf9672-3883-4e9b-98ec-8067848f24ba", 00:16:06.725 "assigned_rate_limits": { 00:16:06.725 "rw_ios_per_sec": 0, 00:16:06.725 "rw_mbytes_per_sec": 0, 00:16:06.725 "r_mbytes_per_sec": 0, 00:16:06.725 "w_mbytes_per_sec": 0 00:16:06.725 }, 00:16:06.725 "claimed": false, 00:16:06.725 "zoned": false, 00:16:06.725 "supported_io_types": { 00:16:06.725 "read": true, 00:16:06.725 "write": true, 00:16:06.725 "unmap": false, 00:16:06.725 "flush": false, 00:16:06.725 "reset": true, 00:16:06.725 "nvme_admin": false, 00:16:06.725 "nvme_io": false, 00:16:06.725 "nvme_io_md": 
false, 00:16:06.725 "write_zeroes": true, 00:16:06.725 "zcopy": false, 00:16:06.725 "get_zone_info": false, 00:16:06.725 "zone_management": false, 00:16:06.725 "zone_append": false, 00:16:06.725 "compare": false, 00:16:06.725 "compare_and_write": false, 00:16:06.725 "abort": false, 00:16:06.725 "seek_hole": false, 00:16:06.725 "seek_data": false, 00:16:06.725 "copy": false, 00:16:06.725 "nvme_iov_md": false 00:16:06.725 }, 00:16:06.725 "driver_specific": { 00:16:06.725 "raid": { 00:16:06.725 "uuid": "5fdf9672-3883-4e9b-98ec-8067848f24ba", 00:16:06.725 "strip_size_kb": 64, 00:16:06.725 "state": "online", 00:16:06.725 "raid_level": "raid5f", 00:16:06.725 "superblock": false, 00:16:06.725 "num_base_bdevs": 4, 00:16:06.725 "num_base_bdevs_discovered": 4, 00:16:06.725 "num_base_bdevs_operational": 4, 00:16:06.725 "base_bdevs_list": [ 00:16:06.725 { 00:16:06.725 "name": "NewBaseBdev", 00:16:06.725 "uuid": "cc31b287-d6ce-4315-bf6f-8d0ef428b933", 00:16:06.725 "is_configured": true, 00:16:06.725 "data_offset": 0, 00:16:06.725 "data_size": 65536 00:16:06.725 }, 00:16:06.725 { 00:16:06.725 "name": "BaseBdev2", 00:16:06.725 "uuid": "fd83f017-7104-4183-80b9-682a51f83c83", 00:16:06.725 "is_configured": true, 00:16:06.725 "data_offset": 0, 00:16:06.725 "data_size": 65536 00:16:06.725 }, 00:16:06.725 { 00:16:06.725 "name": "BaseBdev3", 00:16:06.725 "uuid": "ed8620d7-3d7b-404f-9d8c-588cd27cdfb4", 00:16:06.725 "is_configured": true, 00:16:06.725 "data_offset": 0, 00:16:06.725 "data_size": 65536 00:16:06.725 }, 00:16:06.725 { 00:16:06.725 "name": "BaseBdev4", 00:16:06.725 "uuid": "8bdc5947-8ad1-45cf-9865-7345b1fd97cd", 00:16:06.725 "is_configured": true, 00:16:06.725 "data_offset": 0, 00:16:06.725 "data_size": 65536 00:16:06.725 } 00:16:06.725 ] 00:16:06.725 } 00:16:06.725 } 00:16:06.725 }' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.725 14:48:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:06.725 BaseBdev2 00:16:06.725 BaseBdev3 00:16:06.725 BaseBdev4' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.725 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.725 14:48:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.984 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.985 [2024-12-09 14:48:44.883611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.985 [2024-12-09 14:48:44.883709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.985 [2024-12-09 14:48:44.883809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.985 [2024-12-09 14:48:44.884163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.985 [2024-12-09 14:48:44.884179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84082 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84082 ']' 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84082 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84082 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84082' 00:16:06.985 killing process with pid 84082 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 84082 00:16:06.985 [2024-12-09 14:48:44.918229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.985 14:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 84082 00:16:07.243 [2024-12-09 14:48:45.304570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:08.623 ************************************ 00:16:08.623 END TEST raid5f_state_function_test 00:16:08.623 ************************************ 00:16:08.623 00:16:08.623 real 0m11.605s 00:16:08.623 user 0m18.489s 00:16:08.623 sys 0m2.085s 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.623 14:48:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:08.623 14:48:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:08.623 14:48:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.623 14:48:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.623 ************************************ 00:16:08.623 START TEST 
raid5f_state_function_test_sb 00:16:08.623 ************************************ 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:08.623 
14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:08.623 Process raid pid: 84761 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84761 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84761' 00:16:08.623 14:48:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84761 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84761 ']' 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.623 14:48:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.623 [2024-12-09 14:48:46.570565] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:16:08.623 [2024-12-09 14:48:46.570797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.882 [2024-12-09 14:48:46.746062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.882 [2024-12-09 14:48:46.858817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.142 [2024-12-09 14:48:47.053944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.142 [2024-12-09 14:48:47.054076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.401 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.401 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:09.401 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.401 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.401 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.401 [2024-12-09 14:48:47.398490] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.401 [2024-12-09 14:48:47.398546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.401 [2024-12-09 14:48:47.398556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.401 [2024-12-09 14:48:47.398566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.401 [2024-12-09 14:48:47.398584] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:09.401 [2024-12-09 14:48:47.398593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.401 [2024-12-09 14:48:47.398599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:09.402 [2024-12-09 14:48:47.398607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.402 "name": "Existed_Raid", 00:16:09.402 "uuid": "686043de-39ae-4504-810c-66d5a83a733e", 00:16:09.402 "strip_size_kb": 64, 00:16:09.402 "state": "configuring", 00:16:09.402 "raid_level": "raid5f", 00:16:09.402 "superblock": true, 00:16:09.402 "num_base_bdevs": 4, 00:16:09.402 "num_base_bdevs_discovered": 0, 00:16:09.402 "num_base_bdevs_operational": 4, 00:16:09.402 "base_bdevs_list": [ 00:16:09.402 { 00:16:09.402 "name": "BaseBdev1", 00:16:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.402 "is_configured": false, 00:16:09.402 "data_offset": 0, 00:16:09.402 "data_size": 0 00:16:09.402 }, 00:16:09.402 { 00:16:09.402 "name": "BaseBdev2", 00:16:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.402 "is_configured": false, 00:16:09.402 "data_offset": 0, 00:16:09.402 "data_size": 0 00:16:09.402 }, 00:16:09.402 { 00:16:09.402 "name": "BaseBdev3", 00:16:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.402 "is_configured": false, 00:16:09.402 "data_offset": 0, 00:16:09.402 "data_size": 0 00:16:09.402 }, 00:16:09.402 { 00:16:09.402 "name": "BaseBdev4", 00:16:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.402 "is_configured": false, 00:16:09.402 "data_offset": 0, 00:16:09.402 "data_size": 0 00:16:09.402 } 00:16:09.402 ] 00:16:09.402 }' 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.402 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 [2024-12-09 14:48:47.857677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.972 [2024-12-09 14:48:47.857777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 [2024-12-09 14:48:47.869677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.972 [2024-12-09 14:48:47.869756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.972 [2024-12-09 14:48:47.869784] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.972 [2024-12-09 14:48:47.869807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.972 [2024-12-09 14:48:47.869826] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.972 [2024-12-09 14:48:47.869848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.972 [2024-12-09 14:48:47.869872] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:09.972 [2024-12-09 14:48:47.869900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 [2024-12-09 14:48:47.915917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.972 BaseBdev1 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 [ 00:16:09.972 { 00:16:09.972 "name": "BaseBdev1", 00:16:09.972 "aliases": [ 00:16:09.972 "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5" 00:16:09.972 ], 00:16:09.972 "product_name": "Malloc disk", 00:16:09.972 "block_size": 512, 00:16:09.972 "num_blocks": 65536, 00:16:09.972 "uuid": "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5", 00:16:09.972 "assigned_rate_limits": { 00:16:09.972 "rw_ios_per_sec": 0, 00:16:09.972 "rw_mbytes_per_sec": 0, 00:16:09.972 "r_mbytes_per_sec": 0, 00:16:09.972 "w_mbytes_per_sec": 0 00:16:09.972 }, 00:16:09.972 "claimed": true, 00:16:09.972 "claim_type": "exclusive_write", 00:16:09.972 "zoned": false, 00:16:09.972 "supported_io_types": { 00:16:09.972 "read": true, 00:16:09.972 "write": true, 00:16:09.972 "unmap": true, 00:16:09.972 "flush": true, 00:16:09.972 "reset": true, 00:16:09.972 "nvme_admin": false, 00:16:09.972 "nvme_io": false, 00:16:09.972 "nvme_io_md": false, 00:16:09.972 "write_zeroes": true, 00:16:09.972 "zcopy": true, 00:16:09.972 "get_zone_info": false, 00:16:09.972 "zone_management": false, 00:16:09.972 "zone_append": false, 00:16:09.972 "compare": false, 00:16:09.972 "compare_and_write": false, 00:16:09.972 "abort": true, 00:16:09.972 "seek_hole": false, 00:16:09.972 "seek_data": false, 00:16:09.972 "copy": true, 00:16:09.972 "nvme_iov_md": false 00:16:09.972 }, 00:16:09.972 "memory_domains": [ 00:16:09.972 { 00:16:09.972 "dma_device_id": "system", 00:16:09.972 "dma_device_type": 1 00:16:09.972 }, 00:16:09.972 { 00:16:09.972 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:09.972 "dma_device_type": 2 00:16:09.972 } 00:16:09.972 ], 00:16:09.972 "driver_specific": {} 00:16:09.972 } 00:16:09.972 ] 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.972 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.973 14:48:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.973 14:48:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.973 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.973 "name": "Existed_Raid", 00:16:09.973 "uuid": "93d1c6f1-8351-48a4-8851-d0d5216fb10c", 00:16:09.973 "strip_size_kb": 64, 00:16:09.973 "state": "configuring", 00:16:09.973 "raid_level": "raid5f", 00:16:09.973 "superblock": true, 00:16:09.973 "num_base_bdevs": 4, 00:16:09.973 "num_base_bdevs_discovered": 1, 00:16:09.973 "num_base_bdevs_operational": 4, 00:16:09.973 "base_bdevs_list": [ 00:16:09.973 { 00:16:09.973 "name": "BaseBdev1", 00:16:09.973 "uuid": "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5", 00:16:09.973 "is_configured": true, 00:16:09.973 "data_offset": 2048, 00:16:09.973 "data_size": 63488 00:16:09.973 }, 00:16:09.973 { 00:16:09.973 "name": "BaseBdev2", 00:16:09.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.973 "is_configured": false, 00:16:09.973 "data_offset": 0, 00:16:09.973 "data_size": 0 00:16:09.973 }, 00:16:09.973 { 00:16:09.973 "name": "BaseBdev3", 00:16:09.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.973 "is_configured": false, 00:16:09.973 "data_offset": 0, 00:16:09.973 "data_size": 0 00:16:09.973 }, 00:16:09.973 { 00:16:09.973 "name": "BaseBdev4", 00:16:09.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.973 "is_configured": false, 00:16:09.973 "data_offset": 0, 00:16:09.973 "data_size": 0 00:16:09.973 } 00:16:09.973 ] 00:16:09.973 }' 00:16:09.973 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.973 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.540 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:10.540 14:48:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.540 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.540 [2024-12-09 14:48:48.439088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.540 [2024-12-09 14:48:48.439212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:10.540 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.540 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:10.540 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.540 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.540 [2024-12-09 14:48:48.451119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.540 [2024-12-09 14:48:48.452912] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.541 [2024-12-09 14:48:48.452956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.541 [2024-12-09 14:48:48.452966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.541 [2024-12-09 14:48:48.452976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.541 [2024-12-09 14:48:48.452983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:10.541 [2024-12-09 14:48:48.452991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.541 14:48:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.541 "name": "Existed_Raid", 00:16:10.541 "uuid": "407355f7-deb2-498a-b4d6-3610da3ff9c5", 00:16:10.541 "strip_size_kb": 64, 00:16:10.541 "state": "configuring", 00:16:10.541 "raid_level": "raid5f", 00:16:10.541 "superblock": true, 00:16:10.541 "num_base_bdevs": 4, 00:16:10.541 "num_base_bdevs_discovered": 1, 00:16:10.541 "num_base_bdevs_operational": 4, 00:16:10.541 "base_bdevs_list": [ 00:16:10.541 { 00:16:10.541 "name": "BaseBdev1", 00:16:10.541 "uuid": "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5", 00:16:10.541 "is_configured": true, 00:16:10.541 "data_offset": 2048, 00:16:10.541 "data_size": 63488 00:16:10.541 }, 00:16:10.541 { 00:16:10.541 "name": "BaseBdev2", 00:16:10.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.541 "is_configured": false, 00:16:10.541 "data_offset": 0, 00:16:10.541 "data_size": 0 00:16:10.541 }, 00:16:10.541 { 00:16:10.541 "name": "BaseBdev3", 00:16:10.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.541 "is_configured": false, 00:16:10.541 "data_offset": 0, 00:16:10.541 "data_size": 0 00:16:10.541 }, 00:16:10.541 { 00:16:10.541 "name": "BaseBdev4", 00:16:10.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.541 "is_configured": false, 00:16:10.541 "data_offset": 0, 00:16:10.541 "data_size": 0 00:16:10.541 } 00:16:10.541 ] 00:16:10.541 }' 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.541 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.800 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.800 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:10.800 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.060 [2024-12-09 14:48:48.950301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.060 BaseBdev2 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.060 [ 00:16:11.060 { 00:16:11.060 "name": "BaseBdev2", 00:16:11.060 "aliases": [ 00:16:11.060 
"d620e6ad-c84b-4ca6-8035-3c526d9ce7cb" 00:16:11.060 ], 00:16:11.060 "product_name": "Malloc disk", 00:16:11.060 "block_size": 512, 00:16:11.060 "num_blocks": 65536, 00:16:11.060 "uuid": "d620e6ad-c84b-4ca6-8035-3c526d9ce7cb", 00:16:11.060 "assigned_rate_limits": { 00:16:11.060 "rw_ios_per_sec": 0, 00:16:11.060 "rw_mbytes_per_sec": 0, 00:16:11.060 "r_mbytes_per_sec": 0, 00:16:11.060 "w_mbytes_per_sec": 0 00:16:11.060 }, 00:16:11.060 "claimed": true, 00:16:11.060 "claim_type": "exclusive_write", 00:16:11.060 "zoned": false, 00:16:11.060 "supported_io_types": { 00:16:11.060 "read": true, 00:16:11.060 "write": true, 00:16:11.060 "unmap": true, 00:16:11.060 "flush": true, 00:16:11.060 "reset": true, 00:16:11.060 "nvme_admin": false, 00:16:11.060 "nvme_io": false, 00:16:11.060 "nvme_io_md": false, 00:16:11.060 "write_zeroes": true, 00:16:11.060 "zcopy": true, 00:16:11.060 "get_zone_info": false, 00:16:11.060 "zone_management": false, 00:16:11.060 "zone_append": false, 00:16:11.060 "compare": false, 00:16:11.060 "compare_and_write": false, 00:16:11.060 "abort": true, 00:16:11.060 "seek_hole": false, 00:16:11.060 "seek_data": false, 00:16:11.060 "copy": true, 00:16:11.060 "nvme_iov_md": false 00:16:11.060 }, 00:16:11.060 "memory_domains": [ 00:16:11.060 { 00:16:11.060 "dma_device_id": "system", 00:16:11.060 "dma_device_type": 1 00:16:11.060 }, 00:16:11.060 { 00:16:11.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.060 "dma_device_type": 2 00:16:11.060 } 00:16:11.060 ], 00:16:11.060 "driver_specific": {} 00:16:11.060 } 00:16:11.060 ] 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:11.060 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.061 14:48:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.061 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.061 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.061 "name": "Existed_Raid", 00:16:11.061 "uuid": 
"407355f7-deb2-498a-b4d6-3610da3ff9c5", 00:16:11.061 "strip_size_kb": 64, 00:16:11.061 "state": "configuring", 00:16:11.061 "raid_level": "raid5f", 00:16:11.061 "superblock": true, 00:16:11.061 "num_base_bdevs": 4, 00:16:11.061 "num_base_bdevs_discovered": 2, 00:16:11.061 "num_base_bdevs_operational": 4, 00:16:11.061 "base_bdevs_list": [ 00:16:11.061 { 00:16:11.061 "name": "BaseBdev1", 00:16:11.061 "uuid": "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5", 00:16:11.061 "is_configured": true, 00:16:11.061 "data_offset": 2048, 00:16:11.061 "data_size": 63488 00:16:11.061 }, 00:16:11.061 { 00:16:11.061 "name": "BaseBdev2", 00:16:11.061 "uuid": "d620e6ad-c84b-4ca6-8035-3c526d9ce7cb", 00:16:11.061 "is_configured": true, 00:16:11.061 "data_offset": 2048, 00:16:11.061 "data_size": 63488 00:16:11.061 }, 00:16:11.061 { 00:16:11.061 "name": "BaseBdev3", 00:16:11.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.061 "is_configured": false, 00:16:11.061 "data_offset": 0, 00:16:11.061 "data_size": 0 00:16:11.061 }, 00:16:11.061 { 00:16:11.061 "name": "BaseBdev4", 00:16:11.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.061 "is_configured": false, 00:16:11.061 "data_offset": 0, 00:16:11.061 "data_size": 0 00:16:11.061 } 00:16:11.061 ] 00:16:11.061 }' 00:16:11.061 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.061 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.321 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.321 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.321 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.581 [2024-12-09 14:48:49.464527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.581 BaseBdev3 
00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.581 [ 00:16:11.581 { 00:16:11.581 "name": "BaseBdev3", 00:16:11.581 "aliases": [ 00:16:11.581 "fc8fbe86-1b3d-48fd-9e3f-94147b4527ca" 00:16:11.581 ], 00:16:11.581 "product_name": "Malloc disk", 00:16:11.581 "block_size": 512, 00:16:11.581 "num_blocks": 65536, 00:16:11.581 "uuid": "fc8fbe86-1b3d-48fd-9e3f-94147b4527ca", 00:16:11.581 
"assigned_rate_limits": { 00:16:11.581 "rw_ios_per_sec": 0, 00:16:11.581 "rw_mbytes_per_sec": 0, 00:16:11.581 "r_mbytes_per_sec": 0, 00:16:11.581 "w_mbytes_per_sec": 0 00:16:11.581 }, 00:16:11.581 "claimed": true, 00:16:11.581 "claim_type": "exclusive_write", 00:16:11.581 "zoned": false, 00:16:11.581 "supported_io_types": { 00:16:11.581 "read": true, 00:16:11.581 "write": true, 00:16:11.581 "unmap": true, 00:16:11.581 "flush": true, 00:16:11.581 "reset": true, 00:16:11.581 "nvme_admin": false, 00:16:11.581 "nvme_io": false, 00:16:11.581 "nvme_io_md": false, 00:16:11.581 "write_zeroes": true, 00:16:11.581 "zcopy": true, 00:16:11.581 "get_zone_info": false, 00:16:11.581 "zone_management": false, 00:16:11.581 "zone_append": false, 00:16:11.581 "compare": false, 00:16:11.581 "compare_and_write": false, 00:16:11.581 "abort": true, 00:16:11.581 "seek_hole": false, 00:16:11.581 "seek_data": false, 00:16:11.581 "copy": true, 00:16:11.581 "nvme_iov_md": false 00:16:11.581 }, 00:16:11.581 "memory_domains": [ 00:16:11.581 { 00:16:11.581 "dma_device_id": "system", 00:16:11.581 "dma_device_type": 1 00:16:11.581 }, 00:16:11.581 { 00:16:11.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.581 "dma_device_type": 2 00:16:11.581 } 00:16:11.581 ], 00:16:11.581 "driver_specific": {} 00:16:11.581 } 00:16:11.581 ] 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.581 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.581 "name": "Existed_Raid", 00:16:11.581 "uuid": "407355f7-deb2-498a-b4d6-3610da3ff9c5", 00:16:11.581 "strip_size_kb": 64, 00:16:11.581 "state": "configuring", 00:16:11.581 "raid_level": "raid5f", 00:16:11.581 "superblock": true, 00:16:11.581 "num_base_bdevs": 4, 00:16:11.581 "num_base_bdevs_discovered": 3, 
00:16:11.581 "num_base_bdevs_operational": 4, 00:16:11.581 "base_bdevs_list": [ 00:16:11.581 { 00:16:11.581 "name": "BaseBdev1", 00:16:11.581 "uuid": "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5", 00:16:11.581 "is_configured": true, 00:16:11.581 "data_offset": 2048, 00:16:11.581 "data_size": 63488 00:16:11.581 }, 00:16:11.581 { 00:16:11.581 "name": "BaseBdev2", 00:16:11.582 "uuid": "d620e6ad-c84b-4ca6-8035-3c526d9ce7cb", 00:16:11.582 "is_configured": true, 00:16:11.582 "data_offset": 2048, 00:16:11.582 "data_size": 63488 00:16:11.582 }, 00:16:11.582 { 00:16:11.582 "name": "BaseBdev3", 00:16:11.582 "uuid": "fc8fbe86-1b3d-48fd-9e3f-94147b4527ca", 00:16:11.582 "is_configured": true, 00:16:11.582 "data_offset": 2048, 00:16:11.582 "data_size": 63488 00:16:11.582 }, 00:16:11.582 { 00:16:11.582 "name": "BaseBdev4", 00:16:11.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.582 "is_configured": false, 00:16:11.582 "data_offset": 0, 00:16:11.582 "data_size": 0 00:16:11.582 } 00:16:11.582 ] 00:16:11.582 }' 00:16:11.582 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.582 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.841 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:11.841 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.841 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.101 [2024-12-09 14:48:49.981935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:12.101 [2024-12-09 14:48:49.982365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:12.101 [2024-12-09 14:48:49.982388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:12.101 [2024-12-09 
14:48:49.982684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:12.101 BaseBdev4 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.101 [2024-12-09 14:48:49.990404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:12.101 [2024-12-09 14:48:49.990466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:12.101 [2024-12-09 14:48:49.990794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:12.101 14:48:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.101 14:48:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.101 [ 00:16:12.101 { 00:16:12.101 "name": "BaseBdev4", 00:16:12.101 "aliases": [ 00:16:12.101 "936ea0e5-0aaa-4867-9344-4652ce581ec6" 00:16:12.101 ], 00:16:12.101 "product_name": "Malloc disk", 00:16:12.101 "block_size": 512, 00:16:12.101 "num_blocks": 65536, 00:16:12.101 "uuid": "936ea0e5-0aaa-4867-9344-4652ce581ec6", 00:16:12.101 "assigned_rate_limits": { 00:16:12.101 "rw_ios_per_sec": 0, 00:16:12.101 "rw_mbytes_per_sec": 0, 00:16:12.101 "r_mbytes_per_sec": 0, 00:16:12.101 "w_mbytes_per_sec": 0 00:16:12.101 }, 00:16:12.101 "claimed": true, 00:16:12.101 "claim_type": "exclusive_write", 00:16:12.101 "zoned": false, 00:16:12.101 "supported_io_types": { 00:16:12.101 "read": true, 00:16:12.101 "write": true, 00:16:12.101 "unmap": true, 00:16:12.101 "flush": true, 00:16:12.101 "reset": true, 00:16:12.101 "nvme_admin": false, 00:16:12.101 "nvme_io": false, 00:16:12.101 "nvme_io_md": false, 00:16:12.101 "write_zeroes": true, 00:16:12.101 "zcopy": true, 00:16:12.101 "get_zone_info": false, 00:16:12.101 "zone_management": false, 00:16:12.101 "zone_append": false, 00:16:12.101 "compare": false, 00:16:12.101 "compare_and_write": false, 00:16:12.101 "abort": true, 00:16:12.101 "seek_hole": false, 00:16:12.101 "seek_data": false, 00:16:12.101 "copy": true, 00:16:12.101 "nvme_iov_md": false 00:16:12.101 }, 00:16:12.101 "memory_domains": [ 00:16:12.101 { 00:16:12.101 "dma_device_id": "system", 00:16:12.101 "dma_device_type": 1 00:16:12.101 }, 00:16:12.101 { 00:16:12.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.101 "dma_device_type": 2 00:16:12.101 } 00:16:12.101 ], 00:16:12.101 "driver_specific": {} 00:16:12.101 } 00:16:12.101 ] 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.101 14:48:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.101 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.101 "name": "Existed_Raid", 00:16:12.101 "uuid": "407355f7-deb2-498a-b4d6-3610da3ff9c5", 00:16:12.101 "strip_size_kb": 64, 00:16:12.102 "state": "online", 00:16:12.102 "raid_level": "raid5f", 00:16:12.102 "superblock": true, 00:16:12.102 "num_base_bdevs": 4, 00:16:12.102 "num_base_bdevs_discovered": 4, 00:16:12.102 "num_base_bdevs_operational": 4, 00:16:12.102 "base_bdevs_list": [ 00:16:12.102 { 00:16:12.102 "name": "BaseBdev1", 00:16:12.102 "uuid": "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5", 00:16:12.102 "is_configured": true, 00:16:12.102 "data_offset": 2048, 00:16:12.102 "data_size": 63488 00:16:12.102 }, 00:16:12.102 { 00:16:12.102 "name": "BaseBdev2", 00:16:12.102 "uuid": "d620e6ad-c84b-4ca6-8035-3c526d9ce7cb", 00:16:12.102 "is_configured": true, 00:16:12.102 "data_offset": 2048, 00:16:12.102 "data_size": 63488 00:16:12.102 }, 00:16:12.102 { 00:16:12.102 "name": "BaseBdev3", 00:16:12.102 "uuid": "fc8fbe86-1b3d-48fd-9e3f-94147b4527ca", 00:16:12.102 "is_configured": true, 00:16:12.102 "data_offset": 2048, 00:16:12.102 "data_size": 63488 00:16:12.102 }, 00:16:12.102 { 00:16:12.102 "name": "BaseBdev4", 00:16:12.102 "uuid": "936ea0e5-0aaa-4867-9344-4652ce581ec6", 00:16:12.102 "is_configured": true, 00:16:12.102 "data_offset": 2048, 00:16:12.102 "data_size": 63488 00:16:12.102 } 00:16:12.102 ] 00:16:12.102 }' 00:16:12.102 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.102 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.361 [2024-12-09 14:48:50.462229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.361 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:12.621 "name": "Existed_Raid", 00:16:12.621 "aliases": [ 00:16:12.621 "407355f7-deb2-498a-b4d6-3610da3ff9c5" 00:16:12.621 ], 00:16:12.621 "product_name": "Raid Volume", 00:16:12.621 "block_size": 512, 00:16:12.621 "num_blocks": 190464, 00:16:12.621 "uuid": "407355f7-deb2-498a-b4d6-3610da3ff9c5", 00:16:12.621 "assigned_rate_limits": { 00:16:12.621 "rw_ios_per_sec": 0, 00:16:12.621 "rw_mbytes_per_sec": 0, 00:16:12.621 "r_mbytes_per_sec": 0, 00:16:12.621 "w_mbytes_per_sec": 0 00:16:12.621 }, 00:16:12.621 "claimed": false, 00:16:12.621 "zoned": false, 00:16:12.621 "supported_io_types": { 00:16:12.621 "read": true, 00:16:12.621 "write": true, 00:16:12.621 "unmap": false, 00:16:12.621 "flush": false, 
00:16:12.621 "reset": true, 00:16:12.621 "nvme_admin": false, 00:16:12.621 "nvme_io": false, 00:16:12.621 "nvme_io_md": false, 00:16:12.621 "write_zeroes": true, 00:16:12.621 "zcopy": false, 00:16:12.621 "get_zone_info": false, 00:16:12.621 "zone_management": false, 00:16:12.621 "zone_append": false, 00:16:12.621 "compare": false, 00:16:12.621 "compare_and_write": false, 00:16:12.621 "abort": false, 00:16:12.621 "seek_hole": false, 00:16:12.621 "seek_data": false, 00:16:12.621 "copy": false, 00:16:12.621 "nvme_iov_md": false 00:16:12.621 }, 00:16:12.621 "driver_specific": { 00:16:12.621 "raid": { 00:16:12.621 "uuid": "407355f7-deb2-498a-b4d6-3610da3ff9c5", 00:16:12.621 "strip_size_kb": 64, 00:16:12.621 "state": "online", 00:16:12.621 "raid_level": "raid5f", 00:16:12.621 "superblock": true, 00:16:12.621 "num_base_bdevs": 4, 00:16:12.621 "num_base_bdevs_discovered": 4, 00:16:12.621 "num_base_bdevs_operational": 4, 00:16:12.621 "base_bdevs_list": [ 00:16:12.621 { 00:16:12.621 "name": "BaseBdev1", 00:16:12.621 "uuid": "d8650ab2-ac7c-4dd8-bb3a-b90f5598a8f5", 00:16:12.621 "is_configured": true, 00:16:12.621 "data_offset": 2048, 00:16:12.621 "data_size": 63488 00:16:12.621 }, 00:16:12.621 { 00:16:12.621 "name": "BaseBdev2", 00:16:12.621 "uuid": "d620e6ad-c84b-4ca6-8035-3c526d9ce7cb", 00:16:12.621 "is_configured": true, 00:16:12.621 "data_offset": 2048, 00:16:12.621 "data_size": 63488 00:16:12.621 }, 00:16:12.621 { 00:16:12.621 "name": "BaseBdev3", 00:16:12.621 "uuid": "fc8fbe86-1b3d-48fd-9e3f-94147b4527ca", 00:16:12.621 "is_configured": true, 00:16:12.621 "data_offset": 2048, 00:16:12.621 "data_size": 63488 00:16:12.621 }, 00:16:12.621 { 00:16:12.621 "name": "BaseBdev4", 00:16:12.621 "uuid": "936ea0e5-0aaa-4867-9344-4652ce581ec6", 00:16:12.621 "is_configured": true, 00:16:12.621 "data_offset": 2048, 00:16:12.621 "data_size": 63488 00:16:12.621 } 00:16:12.621 ] 00:16:12.621 } 00:16:12.621 } 00:16:12.621 }' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:12.621 BaseBdev2 00:16:12.621 BaseBdev3 00:16:12.621 BaseBdev4' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.621 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.622 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.882 [2024-12-09 14:48:50.781581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.882 "name": "Existed_Raid", 00:16:12.882 "uuid": "407355f7-deb2-498a-b4d6-3610da3ff9c5", 00:16:12.882 "strip_size_kb": 64, 00:16:12.882 "state": "online", 00:16:12.882 "raid_level": "raid5f", 00:16:12.882 "superblock": true, 00:16:12.882 "num_base_bdevs": 4, 00:16:12.882 "num_base_bdevs_discovered": 3, 00:16:12.882 "num_base_bdevs_operational": 3, 00:16:12.882 "base_bdevs_list": [ 00:16:12.882 { 00:16:12.882 "name": null, 00:16:12.882 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:12.882 "is_configured": false, 00:16:12.882 "data_offset": 0, 00:16:12.882 "data_size": 63488 00:16:12.882 }, 00:16:12.882 { 00:16:12.882 "name": "BaseBdev2", 00:16:12.882 "uuid": "d620e6ad-c84b-4ca6-8035-3c526d9ce7cb", 00:16:12.882 "is_configured": true, 00:16:12.882 "data_offset": 2048, 00:16:12.882 "data_size": 63488 00:16:12.882 }, 00:16:12.882 { 00:16:12.882 "name": "BaseBdev3", 00:16:12.882 "uuid": "fc8fbe86-1b3d-48fd-9e3f-94147b4527ca", 00:16:12.882 "is_configured": true, 00:16:12.882 "data_offset": 2048, 00:16:12.882 "data_size": 63488 00:16:12.882 }, 00:16:12.882 { 00:16:12.882 "name": "BaseBdev4", 00:16:12.882 "uuid": "936ea0e5-0aaa-4867-9344-4652ce581ec6", 00:16:12.882 "is_configured": true, 00:16:12.882 "data_offset": 2048, 00:16:12.882 "data_size": 63488 00:16:12.882 } 00:16:12.882 ] 00:16:12.882 }' 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.882 14:48:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.459 [2024-12-09 14:48:51.368049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.459 [2024-12-09 14:48:51.368301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.459 [2024-12-09 14:48:51.464751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.459 
14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.459 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.460 [2024-12-09 14:48:51.520746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.736 [2024-12-09 14:48:51.681973] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:13.736 [2024-12-09 14:48:51.682028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.736 14:48:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.997 BaseBdev2 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.997 [ 00:16:13.997 { 00:16:13.997 "name": "BaseBdev2", 00:16:13.997 "aliases": [ 00:16:13.997 "db0b2fe3-19ba-48c8-a5e7-2d259c223694" 00:16:13.997 ], 00:16:13.997 "product_name": "Malloc disk", 00:16:13.997 "block_size": 512, 00:16:13.997 "num_blocks": 65536, 00:16:13.997 "uuid": 
"db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:13.997 "assigned_rate_limits": { 00:16:13.997 "rw_ios_per_sec": 0, 00:16:13.997 "rw_mbytes_per_sec": 0, 00:16:13.997 "r_mbytes_per_sec": 0, 00:16:13.997 "w_mbytes_per_sec": 0 00:16:13.997 }, 00:16:13.997 "claimed": false, 00:16:13.997 "zoned": false, 00:16:13.997 "supported_io_types": { 00:16:13.997 "read": true, 00:16:13.997 "write": true, 00:16:13.997 "unmap": true, 00:16:13.997 "flush": true, 00:16:13.997 "reset": true, 00:16:13.997 "nvme_admin": false, 00:16:13.997 "nvme_io": false, 00:16:13.997 "nvme_io_md": false, 00:16:13.997 "write_zeroes": true, 00:16:13.997 "zcopy": true, 00:16:13.997 "get_zone_info": false, 00:16:13.997 "zone_management": false, 00:16:13.997 "zone_append": false, 00:16:13.997 "compare": false, 00:16:13.997 "compare_and_write": false, 00:16:13.997 "abort": true, 00:16:13.997 "seek_hole": false, 00:16:13.997 "seek_data": false, 00:16:13.997 "copy": true, 00:16:13.997 "nvme_iov_md": false 00:16:13.997 }, 00:16:13.997 "memory_domains": [ 00:16:13.997 { 00:16:13.997 "dma_device_id": "system", 00:16:13.997 "dma_device_type": 1 00:16:13.997 }, 00:16:13.997 { 00:16:13.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.997 "dma_device_type": 2 00:16:13.997 } 00:16:13.997 ], 00:16:13.997 "driver_specific": {} 00:16:13.997 } 00:16:13.997 ] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.997 BaseBdev3 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.997 [ 00:16:13.997 { 00:16:13.997 "name": "BaseBdev3", 00:16:13.997 "aliases": [ 00:16:13.997 "0f9c564f-d6eb-483f-b79c-67d53eec6b09" 00:16:13.997 ], 00:16:13.997 
"product_name": "Malloc disk", 00:16:13.997 "block_size": 512, 00:16:13.997 "num_blocks": 65536, 00:16:13.997 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:13.997 "assigned_rate_limits": { 00:16:13.997 "rw_ios_per_sec": 0, 00:16:13.997 "rw_mbytes_per_sec": 0, 00:16:13.997 "r_mbytes_per_sec": 0, 00:16:13.997 "w_mbytes_per_sec": 0 00:16:13.997 }, 00:16:13.997 "claimed": false, 00:16:13.997 "zoned": false, 00:16:13.997 "supported_io_types": { 00:16:13.997 "read": true, 00:16:13.997 "write": true, 00:16:13.997 "unmap": true, 00:16:13.997 "flush": true, 00:16:13.997 "reset": true, 00:16:13.997 "nvme_admin": false, 00:16:13.997 "nvme_io": false, 00:16:13.997 "nvme_io_md": false, 00:16:13.997 "write_zeroes": true, 00:16:13.997 "zcopy": true, 00:16:13.997 "get_zone_info": false, 00:16:13.997 "zone_management": false, 00:16:13.997 "zone_append": false, 00:16:13.997 "compare": false, 00:16:13.997 "compare_and_write": false, 00:16:13.997 "abort": true, 00:16:13.997 "seek_hole": false, 00:16:13.997 "seek_data": false, 00:16:13.997 "copy": true, 00:16:13.997 "nvme_iov_md": false 00:16:13.997 }, 00:16:13.997 "memory_domains": [ 00:16:13.997 { 00:16:13.997 "dma_device_id": "system", 00:16:13.997 "dma_device_type": 1 00:16:13.997 }, 00:16:13.997 { 00:16:13.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.997 "dma_device_type": 2 00:16:13.997 } 00:16:13.997 ], 00:16:13.997 "driver_specific": {} 00:16:13.997 } 00:16:13.997 ] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.997 14:48:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.997 BaseBdev4 00:16:13.997 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.997 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:13.997 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:13.997 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.997 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.998 [ 00:16:13.998 { 00:16:13.998 "name": "BaseBdev4", 00:16:13.998 
"aliases": [ 00:16:13.998 "f966d709-167d-41aa-a85b-8488f0e2a9ac" 00:16:13.998 ], 00:16:13.998 "product_name": "Malloc disk", 00:16:13.998 "block_size": 512, 00:16:13.998 "num_blocks": 65536, 00:16:13.998 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:13.998 "assigned_rate_limits": { 00:16:13.998 "rw_ios_per_sec": 0, 00:16:13.998 "rw_mbytes_per_sec": 0, 00:16:13.998 "r_mbytes_per_sec": 0, 00:16:13.998 "w_mbytes_per_sec": 0 00:16:13.998 }, 00:16:13.998 "claimed": false, 00:16:13.998 "zoned": false, 00:16:13.998 "supported_io_types": { 00:16:13.998 "read": true, 00:16:13.998 "write": true, 00:16:13.998 "unmap": true, 00:16:13.998 "flush": true, 00:16:13.998 "reset": true, 00:16:13.998 "nvme_admin": false, 00:16:13.998 "nvme_io": false, 00:16:13.998 "nvme_io_md": false, 00:16:13.998 "write_zeroes": true, 00:16:13.998 "zcopy": true, 00:16:13.998 "get_zone_info": false, 00:16:13.998 "zone_management": false, 00:16:13.998 "zone_append": false, 00:16:13.998 "compare": false, 00:16:13.998 "compare_and_write": false, 00:16:13.998 "abort": true, 00:16:13.998 "seek_hole": false, 00:16:13.998 "seek_data": false, 00:16:13.998 "copy": true, 00:16:13.998 "nvme_iov_md": false 00:16:13.998 }, 00:16:13.998 "memory_domains": [ 00:16:13.998 { 00:16:13.998 "dma_device_id": "system", 00:16:13.998 "dma_device_type": 1 00:16:13.998 }, 00:16:13.998 { 00:16:13.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.998 "dma_device_type": 2 00:16:13.998 } 00:16:13.998 ], 00:16:13.998 "driver_specific": {} 00:16:13.998 } 00:16:13.998 ] 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.998 
14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.998 [2024-12-09 14:48:52.077649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.998 [2024-12-09 14:48:52.077747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.998 [2024-12-09 14:48:52.077796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.998 [2024-12-09 14:48:52.079762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.998 [2024-12-09 14:48:52.079860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.998 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.258 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.258 "name": "Existed_Raid", 00:16:14.258 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:14.258 "strip_size_kb": 64, 00:16:14.258 "state": "configuring", 00:16:14.258 "raid_level": "raid5f", 00:16:14.258 "superblock": true, 00:16:14.258 "num_base_bdevs": 4, 00:16:14.258 "num_base_bdevs_discovered": 3, 00:16:14.258 "num_base_bdevs_operational": 4, 00:16:14.258 "base_bdevs_list": [ 00:16:14.258 { 00:16:14.258 "name": "BaseBdev1", 00:16:14.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.258 "is_configured": false, 00:16:14.258 "data_offset": 0, 00:16:14.258 "data_size": 0 00:16:14.258 }, 00:16:14.258 { 00:16:14.258 "name": "BaseBdev2", 00:16:14.258 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:14.258 "is_configured": true, 00:16:14.258 "data_offset": 2048, 00:16:14.258 "data_size": 63488 00:16:14.258 }, 00:16:14.258 { 00:16:14.258 "name": "BaseBdev3", 
00:16:14.258 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:14.258 "is_configured": true, 00:16:14.258 "data_offset": 2048, 00:16:14.258 "data_size": 63488 00:16:14.258 }, 00:16:14.258 { 00:16:14.258 "name": "BaseBdev4", 00:16:14.258 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:14.258 "is_configured": true, 00:16:14.258 "data_offset": 2048, 00:16:14.258 "data_size": 63488 00:16:14.258 } 00:16:14.258 ] 00:16:14.258 }' 00:16:14.258 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.258 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.519 [2024-12-09 14:48:52.548810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.519 
14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.519 "name": "Existed_Raid", 00:16:14.519 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:14.519 "strip_size_kb": 64, 00:16:14.519 "state": "configuring", 00:16:14.519 "raid_level": "raid5f", 00:16:14.519 "superblock": true, 00:16:14.519 "num_base_bdevs": 4, 00:16:14.519 "num_base_bdevs_discovered": 2, 00:16:14.519 "num_base_bdevs_operational": 4, 00:16:14.519 "base_bdevs_list": [ 00:16:14.519 { 00:16:14.519 "name": "BaseBdev1", 00:16:14.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.519 "is_configured": false, 00:16:14.519 "data_offset": 0, 00:16:14.519 "data_size": 0 00:16:14.519 }, 00:16:14.519 { 00:16:14.519 "name": null, 00:16:14.519 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:14.519 "is_configured": false, 00:16:14.519 "data_offset": 0, 00:16:14.519 "data_size": 63488 00:16:14.519 }, 00:16:14.519 { 
00:16:14.519 "name": "BaseBdev3", 00:16:14.519 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:14.519 "is_configured": true, 00:16:14.519 "data_offset": 2048, 00:16:14.519 "data_size": 63488 00:16:14.519 }, 00:16:14.519 { 00:16:14.519 "name": "BaseBdev4", 00:16:14.519 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:14.519 "is_configured": true, 00:16:14.519 "data_offset": 2048, 00:16:14.519 "data_size": 63488 00:16:14.519 } 00:16:14.519 ] 00:16:14.519 }' 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.519 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.128 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 14:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:15.128 14:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 [2024-12-09 14:48:53.056252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.128 BaseBdev1 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 [ 00:16:15.128 { 00:16:15.128 "name": "BaseBdev1", 00:16:15.128 "aliases": [ 00:16:15.128 "489be646-85b8-4049-b243-aec613f33946" 00:16:15.128 ], 00:16:15.128 "product_name": "Malloc disk", 00:16:15.128 "block_size": 512, 00:16:15.128 "num_blocks": 65536, 00:16:15.128 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:15.128 "assigned_rate_limits": { 00:16:15.128 "rw_ios_per_sec": 0, 00:16:15.128 "rw_mbytes_per_sec": 0, 00:16:15.128 
"r_mbytes_per_sec": 0, 00:16:15.128 "w_mbytes_per_sec": 0 00:16:15.128 }, 00:16:15.128 "claimed": true, 00:16:15.128 "claim_type": "exclusive_write", 00:16:15.128 "zoned": false, 00:16:15.128 "supported_io_types": { 00:16:15.128 "read": true, 00:16:15.128 "write": true, 00:16:15.128 "unmap": true, 00:16:15.128 "flush": true, 00:16:15.128 "reset": true, 00:16:15.128 "nvme_admin": false, 00:16:15.128 "nvme_io": false, 00:16:15.128 "nvme_io_md": false, 00:16:15.128 "write_zeroes": true, 00:16:15.128 "zcopy": true, 00:16:15.128 "get_zone_info": false, 00:16:15.128 "zone_management": false, 00:16:15.128 "zone_append": false, 00:16:15.128 "compare": false, 00:16:15.128 "compare_and_write": false, 00:16:15.128 "abort": true, 00:16:15.128 "seek_hole": false, 00:16:15.128 "seek_data": false, 00:16:15.128 "copy": true, 00:16:15.128 "nvme_iov_md": false 00:16:15.128 }, 00:16:15.128 "memory_domains": [ 00:16:15.128 { 00:16:15.128 "dma_device_id": "system", 00:16:15.128 "dma_device_type": 1 00:16:15.128 }, 00:16:15.128 { 00:16:15.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.128 "dma_device_type": 2 00:16:15.128 } 00:16:15.128 ], 00:16:15.128 "driver_specific": {} 00:16:15.128 } 00:16:15.128 ] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.128 14:48:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.128 "name": "Existed_Raid", 00:16:15.128 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:15.128 "strip_size_kb": 64, 00:16:15.128 "state": "configuring", 00:16:15.128 "raid_level": "raid5f", 00:16:15.128 "superblock": true, 00:16:15.128 "num_base_bdevs": 4, 00:16:15.128 "num_base_bdevs_discovered": 3, 00:16:15.128 "num_base_bdevs_operational": 4, 00:16:15.128 "base_bdevs_list": [ 00:16:15.128 { 00:16:15.128 "name": "BaseBdev1", 00:16:15.128 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:15.128 "is_configured": true, 00:16:15.128 "data_offset": 2048, 00:16:15.128 "data_size": 63488 00:16:15.128 
}, 00:16:15.128 { 00:16:15.128 "name": null, 00:16:15.128 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:15.128 "is_configured": false, 00:16:15.128 "data_offset": 0, 00:16:15.128 "data_size": 63488 00:16:15.128 }, 00:16:15.128 { 00:16:15.128 "name": "BaseBdev3", 00:16:15.128 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:15.128 "is_configured": true, 00:16:15.128 "data_offset": 2048, 00:16:15.128 "data_size": 63488 00:16:15.128 }, 00:16:15.128 { 00:16:15.128 "name": "BaseBdev4", 00:16:15.128 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:15.128 "is_configured": true, 00:16:15.128 "data_offset": 2048, 00:16:15.128 "data_size": 63488 00:16:15.128 } 00:16:15.128 ] 00:16:15.128 }' 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.128 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.698 
[2024-12-09 14:48:53.607422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:15.698 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.698 "name": "Existed_Raid", 00:16:15.698 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:15.698 "strip_size_kb": 64, 00:16:15.698 "state": "configuring", 00:16:15.698 "raid_level": "raid5f", 00:16:15.698 "superblock": true, 00:16:15.698 "num_base_bdevs": 4, 00:16:15.698 "num_base_bdevs_discovered": 2, 00:16:15.698 "num_base_bdevs_operational": 4, 00:16:15.698 "base_bdevs_list": [ 00:16:15.698 { 00:16:15.698 "name": "BaseBdev1", 00:16:15.698 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:15.698 "is_configured": true, 00:16:15.698 "data_offset": 2048, 00:16:15.698 "data_size": 63488 00:16:15.698 }, 00:16:15.698 { 00:16:15.698 "name": null, 00:16:15.698 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:15.698 "is_configured": false, 00:16:15.698 "data_offset": 0, 00:16:15.698 "data_size": 63488 00:16:15.698 }, 00:16:15.698 { 00:16:15.698 "name": null, 00:16:15.698 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:15.698 "is_configured": false, 00:16:15.698 "data_offset": 0, 00:16:15.698 "data_size": 63488 00:16:15.698 }, 00:16:15.698 { 00:16:15.699 "name": "BaseBdev4", 00:16:15.699 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:15.699 "is_configured": true, 00:16:15.699 "data_offset": 2048, 00:16:15.699 "data_size": 63488 00:16:15.699 } 00:16:15.699 ] 00:16:15.699 }' 00:16:15.699 14:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.699 14:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 [2024-12-09 14:48:54.062764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.958 14:48:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.958 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.218 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.218 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.218 "name": "Existed_Raid", 00:16:16.218 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:16.218 "strip_size_kb": 64, 00:16:16.218 "state": "configuring", 00:16:16.218 "raid_level": "raid5f", 00:16:16.218 "superblock": true, 00:16:16.218 "num_base_bdevs": 4, 00:16:16.218 "num_base_bdevs_discovered": 3, 00:16:16.218 "num_base_bdevs_operational": 4, 00:16:16.218 "base_bdevs_list": [ 00:16:16.218 { 00:16:16.218 "name": "BaseBdev1", 00:16:16.218 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:16.218 "is_configured": true, 00:16:16.218 "data_offset": 2048, 00:16:16.218 "data_size": 63488 00:16:16.218 }, 00:16:16.218 { 00:16:16.218 "name": null, 00:16:16.218 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:16.218 "is_configured": false, 00:16:16.218 "data_offset": 0, 00:16:16.218 "data_size": 63488 00:16:16.218 }, 00:16:16.218 { 00:16:16.218 "name": "BaseBdev3", 00:16:16.218 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:16.218 "is_configured": true, 00:16:16.218 "data_offset": 2048, 00:16:16.218 "data_size": 63488 00:16:16.218 }, 00:16:16.218 { 
00:16:16.218 "name": "BaseBdev4", 00:16:16.218 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:16.218 "is_configured": true, 00:16:16.218 "data_offset": 2048, 00:16:16.218 "data_size": 63488 00:16:16.218 } 00:16:16.218 ] 00:16:16.218 }' 00:16:16.218 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.218 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.478 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 [2024-12-09 14:48:54.506017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.737 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.738 "name": "Existed_Raid", 00:16:16.738 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:16.738 "strip_size_kb": 64, 00:16:16.738 "state": "configuring", 00:16:16.738 "raid_level": "raid5f", 00:16:16.738 "superblock": true, 00:16:16.738 "num_base_bdevs": 4, 00:16:16.738 "num_base_bdevs_discovered": 2, 00:16:16.738 
"num_base_bdevs_operational": 4, 00:16:16.738 "base_bdevs_list": [ 00:16:16.738 { 00:16:16.738 "name": null, 00:16:16.738 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:16.738 "is_configured": false, 00:16:16.738 "data_offset": 0, 00:16:16.738 "data_size": 63488 00:16:16.738 }, 00:16:16.738 { 00:16:16.738 "name": null, 00:16:16.738 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:16.738 "is_configured": false, 00:16:16.738 "data_offset": 0, 00:16:16.738 "data_size": 63488 00:16:16.738 }, 00:16:16.738 { 00:16:16.738 "name": "BaseBdev3", 00:16:16.738 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:16.738 "is_configured": true, 00:16:16.738 "data_offset": 2048, 00:16:16.738 "data_size": 63488 00:16:16.738 }, 00:16:16.738 { 00:16:16.738 "name": "BaseBdev4", 00:16:16.738 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:16.738 "is_configured": true, 00:16:16.738 "data_offset": 2048, 00:16:16.738 "data_size": 63488 00:16:16.738 } 00:16:16.738 ] 00:16:16.738 }' 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.738 14:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.997 [2024-12-09 14:48:55.072123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.997 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.256 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.256 "name": "Existed_Raid", 00:16:17.256 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:17.256 "strip_size_kb": 64, 00:16:17.256 "state": "configuring", 00:16:17.256 "raid_level": "raid5f", 00:16:17.256 "superblock": true, 00:16:17.256 "num_base_bdevs": 4, 00:16:17.256 "num_base_bdevs_discovered": 3, 00:16:17.256 "num_base_bdevs_operational": 4, 00:16:17.256 "base_bdevs_list": [ 00:16:17.256 { 00:16:17.256 "name": null, 00:16:17.256 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:17.256 "is_configured": false, 00:16:17.256 "data_offset": 0, 00:16:17.256 "data_size": 63488 00:16:17.256 }, 00:16:17.256 { 00:16:17.256 "name": "BaseBdev2", 00:16:17.256 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:17.256 "is_configured": true, 00:16:17.256 "data_offset": 2048, 00:16:17.256 "data_size": 63488 00:16:17.256 }, 00:16:17.256 { 00:16:17.256 "name": "BaseBdev3", 00:16:17.256 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:17.256 "is_configured": true, 00:16:17.256 "data_offset": 2048, 00:16:17.256 "data_size": 63488 00:16:17.256 }, 00:16:17.256 { 00:16:17.256 "name": "BaseBdev4", 00:16:17.256 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:17.256 "is_configured": true, 00:16:17.256 "data_offset": 2048, 00:16:17.256 "data_size": 63488 00:16:17.256 } 00:16:17.256 ] 00:16:17.256 }' 00:16:17.256 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.256 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 489be646-85b8-4049-b243-aec613f33946 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.515 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.775 [2024-12-09 14:48:55.644144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:17.775 [2024-12-09 14:48:55.644400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:17.775 [2024-12-09 
14:48:55.644413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:17.775 [2024-12-09 14:48:55.644705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:17.775 NewBaseBdev 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.775 [2024-12-09 14:48:55.652289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:17.775 [2024-12-09 14:48:55.652360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:17.775 [2024-12-09 14:48:55.652562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.775 [ 00:16:17.775 { 00:16:17.775 "name": "NewBaseBdev", 00:16:17.775 "aliases": [ 00:16:17.775 "489be646-85b8-4049-b243-aec613f33946" 00:16:17.775 ], 00:16:17.775 "product_name": "Malloc disk", 00:16:17.775 "block_size": 512, 00:16:17.775 "num_blocks": 65536, 00:16:17.775 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:17.775 "assigned_rate_limits": { 00:16:17.775 "rw_ios_per_sec": 0, 00:16:17.775 "rw_mbytes_per_sec": 0, 00:16:17.775 "r_mbytes_per_sec": 0, 00:16:17.775 "w_mbytes_per_sec": 0 00:16:17.775 }, 00:16:17.775 "claimed": true, 00:16:17.775 "claim_type": "exclusive_write", 00:16:17.775 "zoned": false, 00:16:17.775 "supported_io_types": { 00:16:17.775 "read": true, 00:16:17.775 "write": true, 00:16:17.775 "unmap": true, 00:16:17.775 "flush": true, 00:16:17.775 "reset": true, 00:16:17.775 "nvme_admin": false, 00:16:17.775 "nvme_io": false, 00:16:17.775 "nvme_io_md": false, 00:16:17.775 "write_zeroes": true, 00:16:17.775 "zcopy": true, 00:16:17.775 "get_zone_info": false, 00:16:17.775 "zone_management": false, 00:16:17.775 "zone_append": false, 00:16:17.775 "compare": false, 00:16:17.775 "compare_and_write": false, 00:16:17.775 "abort": true, 00:16:17.775 "seek_hole": false, 00:16:17.775 "seek_data": false, 00:16:17.775 "copy": true, 00:16:17.775 "nvme_iov_md": false 00:16:17.775 }, 00:16:17.775 "memory_domains": [ 00:16:17.775 { 00:16:17.775 "dma_device_id": "system", 00:16:17.775 "dma_device_type": 1 00:16:17.775 }, 00:16:17.775 { 00:16:17.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.775 "dma_device_type": 2 00:16:17.775 } 00:16:17.775 ], 00:16:17.775 "driver_specific": {} 00:16:17.775 } 00:16:17.775 ] 00:16:17.775 14:48:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.775 "name": "Existed_Raid", 00:16:17.775 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:17.775 "strip_size_kb": 64, 00:16:17.775 "state": "online", 00:16:17.775 "raid_level": "raid5f", 00:16:17.775 "superblock": true, 00:16:17.775 "num_base_bdevs": 4, 00:16:17.775 "num_base_bdevs_discovered": 4, 00:16:17.775 "num_base_bdevs_operational": 4, 00:16:17.775 "base_bdevs_list": [ 00:16:17.775 { 00:16:17.775 "name": "NewBaseBdev", 00:16:17.775 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:17.775 "is_configured": true, 00:16:17.775 "data_offset": 2048, 00:16:17.775 "data_size": 63488 00:16:17.775 }, 00:16:17.775 { 00:16:17.775 "name": "BaseBdev2", 00:16:17.775 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:17.775 "is_configured": true, 00:16:17.775 "data_offset": 2048, 00:16:17.775 "data_size": 63488 00:16:17.775 }, 00:16:17.775 { 00:16:17.775 "name": "BaseBdev3", 00:16:17.775 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:17.775 "is_configured": true, 00:16:17.775 "data_offset": 2048, 00:16:17.775 "data_size": 63488 00:16:17.775 }, 00:16:17.775 { 00:16:17.775 "name": "BaseBdev4", 00:16:17.775 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:17.775 "is_configured": true, 00:16:17.775 "data_offset": 2048, 00:16:17.775 "data_size": 63488 00:16:17.775 } 00:16:17.775 ] 00:16:17.775 }' 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.775 14:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.035 [2024-12-09 14:48:56.096376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.035 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.035 "name": "Existed_Raid", 00:16:18.035 "aliases": [ 00:16:18.035 "cdb33a62-3fc6-439d-ab66-fa590620251f" 00:16:18.035 ], 00:16:18.035 "product_name": "Raid Volume", 00:16:18.035 "block_size": 512, 00:16:18.035 "num_blocks": 190464, 00:16:18.035 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:18.035 "assigned_rate_limits": { 00:16:18.035 "rw_ios_per_sec": 0, 00:16:18.035 "rw_mbytes_per_sec": 0, 00:16:18.035 "r_mbytes_per_sec": 0, 00:16:18.035 "w_mbytes_per_sec": 0 00:16:18.035 }, 00:16:18.035 "claimed": false, 00:16:18.035 "zoned": false, 00:16:18.035 "supported_io_types": { 00:16:18.035 "read": true, 00:16:18.035 "write": true, 00:16:18.035 "unmap": false, 00:16:18.035 "flush": false, 00:16:18.035 "reset": true, 00:16:18.035 "nvme_admin": false, 00:16:18.035 "nvme_io": false, 
00:16:18.035 "nvme_io_md": false, 00:16:18.035 "write_zeroes": true, 00:16:18.035 "zcopy": false, 00:16:18.035 "get_zone_info": false, 00:16:18.035 "zone_management": false, 00:16:18.035 "zone_append": false, 00:16:18.035 "compare": false, 00:16:18.035 "compare_and_write": false, 00:16:18.035 "abort": false, 00:16:18.035 "seek_hole": false, 00:16:18.035 "seek_data": false, 00:16:18.035 "copy": false, 00:16:18.035 "nvme_iov_md": false 00:16:18.035 }, 00:16:18.035 "driver_specific": { 00:16:18.035 "raid": { 00:16:18.035 "uuid": "cdb33a62-3fc6-439d-ab66-fa590620251f", 00:16:18.035 "strip_size_kb": 64, 00:16:18.035 "state": "online", 00:16:18.035 "raid_level": "raid5f", 00:16:18.035 "superblock": true, 00:16:18.035 "num_base_bdevs": 4, 00:16:18.035 "num_base_bdevs_discovered": 4, 00:16:18.036 "num_base_bdevs_operational": 4, 00:16:18.036 "base_bdevs_list": [ 00:16:18.036 { 00:16:18.036 "name": "NewBaseBdev", 00:16:18.036 "uuid": "489be646-85b8-4049-b243-aec613f33946", 00:16:18.036 "is_configured": true, 00:16:18.036 "data_offset": 2048, 00:16:18.036 "data_size": 63488 00:16:18.036 }, 00:16:18.036 { 00:16:18.036 "name": "BaseBdev2", 00:16:18.036 "uuid": "db0b2fe3-19ba-48c8-a5e7-2d259c223694", 00:16:18.036 "is_configured": true, 00:16:18.036 "data_offset": 2048, 00:16:18.036 "data_size": 63488 00:16:18.036 }, 00:16:18.036 { 00:16:18.036 "name": "BaseBdev3", 00:16:18.036 "uuid": "0f9c564f-d6eb-483f-b79c-67d53eec6b09", 00:16:18.036 "is_configured": true, 00:16:18.036 "data_offset": 2048, 00:16:18.036 "data_size": 63488 00:16:18.036 }, 00:16:18.036 { 00:16:18.036 "name": "BaseBdev4", 00:16:18.036 "uuid": "f966d709-167d-41aa-a85b-8488f0e2a9ac", 00:16:18.036 "is_configured": true, 00:16:18.036 "data_offset": 2048, 00:16:18.036 "data_size": 63488 00:16:18.036 } 00:16:18.036 ] 00:16:18.036 } 00:16:18.036 } 00:16:18.036 }' 00:16:18.036 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:18.296 BaseBdev2 00:16:18.296 BaseBdev3 00:16:18.296 BaseBdev4' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.296 14:48:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.296 14:48:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.296 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.557 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.558 [2024-12-09 14:48:56.427556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.558 [2024-12-09 14:48:56.427594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.558 [2024-12-09 14:48:56.427674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.558 [2024-12-09 14:48:56.427974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.558 [2024-12-09 14:48:56.427986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84761 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84761 ']' 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84761 00:16:18.558 14:48:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84761 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.558 killing process with pid 84761 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84761' 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84761 00:16:18.558 [2024-12-09 14:48:56.477113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.558 14:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84761 00:16:18.817 [2024-12-09 14:48:56.866314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.203 14:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:20.203 00:16:20.203 real 0m11.496s 00:16:20.203 user 0m18.230s 00:16:20.203 sys 0m2.136s 00:16:20.203 14:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.203 14:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 ************************************ 00:16:20.203 END TEST raid5f_state_function_test_sb 00:16:20.203 ************************************ 00:16:20.203 14:48:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:20.203 14:48:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:20.203 
14:48:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.203 14:48:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 ************************************ 00:16:20.203 START TEST raid5f_superblock_test 00:16:20.203 ************************************ 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85426 00:16:20.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85426 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85426 ']' 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.203 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 [2024-12-09 14:48:58.134568] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:16:20.203 [2024-12-09 14:48:58.134771] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85426 ] 00:16:20.203 [2024-12-09 14:48:58.309441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.463 [2024-12-09 14:48:58.419953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.722 [2024-12-09 14:48:58.621657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.723 [2024-12-09 14:48:58.621743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.983 14:48:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.983 malloc1 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.983 [2024-12-09 14:48:59.016661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.983 [2024-12-09 14:48:59.016739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.983 [2024-12-09 14:48:59.016762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.983 [2024-12-09 14:48:59.016772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.983 [2024-12-09 14:48:59.019061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.983 [2024-12-09 14:48:59.019104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.983 pt1 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.983 malloc2 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.983 [2024-12-09 14:48:59.071307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.983 [2024-12-09 14:48:59.071402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.983 [2024-12-09 14:48:59.071444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:20.983 [2024-12-09 14:48:59.071473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.983 [2024-12-09 14:48:59.073534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.983 [2024-12-09 14:48:59.073631] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.983 pt2 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.983 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.246 malloc3 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.246 [2024-12-09 14:48:59.142013] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:21.246 [2024-12-09 14:48:59.142112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.246 [2024-12-09 14:48:59.142153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:21.246 [2024-12-09 14:48:59.142201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.246 [2024-12-09 14:48:59.144445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.246 [2024-12-09 14:48:59.144518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:21.246 pt3 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.246 14:48:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.246 malloc4 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.246 [2024-12-09 14:48:59.200403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:21.246 [2024-12-09 14:48:59.200459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.246 [2024-12-09 14:48:59.200480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:21.246 [2024-12-09 14:48:59.200488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.246 [2024-12-09 14:48:59.202603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.246 [2024-12-09 14:48:59.202634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:21.246 pt4 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.246 [2024-12-09 14:48:59.212432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.246 [2024-12-09 14:48:59.214206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.246 [2024-12-09 14:48:59.214304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.246 [2024-12-09 14:48:59.214350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:21.246 [2024-12-09 14:48:59.214535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:21.246 [2024-12-09 14:48:59.214550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:21.246 [2024-12-09 14:48:59.214825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:21.246 [2024-12-09 14:48:59.222870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:21.246 [2024-12-09 14:48:59.222892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:21.246 [2024-12-09 14:48:59.223083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.246 
14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.246 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.246 "name": "raid_bdev1", 00:16:21.246 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:21.246 "strip_size_kb": 64, 00:16:21.246 "state": "online", 00:16:21.246 "raid_level": "raid5f", 00:16:21.246 "superblock": true, 00:16:21.246 "num_base_bdevs": 4, 00:16:21.246 "num_base_bdevs_discovered": 4, 00:16:21.246 "num_base_bdevs_operational": 4, 00:16:21.246 "base_bdevs_list": [ 00:16:21.246 { 00:16:21.246 "name": "pt1", 00:16:21.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.246 "is_configured": true, 00:16:21.246 "data_offset": 2048, 00:16:21.246 "data_size": 63488 00:16:21.246 }, 00:16:21.246 { 00:16:21.246 "name": "pt2", 00:16:21.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.246 "is_configured": true, 00:16:21.246 "data_offset": 2048, 00:16:21.246 
"data_size": 63488 00:16:21.246 }, 00:16:21.246 { 00:16:21.246 "name": "pt3", 00:16:21.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.246 "is_configured": true, 00:16:21.246 "data_offset": 2048, 00:16:21.246 "data_size": 63488 00:16:21.246 }, 00:16:21.246 { 00:16:21.246 "name": "pt4", 00:16:21.247 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.247 "is_configured": true, 00:16:21.247 "data_offset": 2048, 00:16:21.247 "data_size": 63488 00:16:21.247 } 00:16:21.247 ] 00:16:21.247 }' 00:16:21.247 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.247 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.818 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:21.818 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:21.818 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.818 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.818 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.819 [2024-12-09 14:48:59.655581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.819 "name": "raid_bdev1", 00:16:21.819 "aliases": [ 00:16:21.819 "71ec3629-c1bd-4cf2-bad5-679081d3dd30" 00:16:21.819 ], 00:16:21.819 "product_name": "Raid Volume", 00:16:21.819 "block_size": 512, 00:16:21.819 "num_blocks": 190464, 00:16:21.819 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:21.819 "assigned_rate_limits": { 00:16:21.819 "rw_ios_per_sec": 0, 00:16:21.819 "rw_mbytes_per_sec": 0, 00:16:21.819 "r_mbytes_per_sec": 0, 00:16:21.819 "w_mbytes_per_sec": 0 00:16:21.819 }, 00:16:21.819 "claimed": false, 00:16:21.819 "zoned": false, 00:16:21.819 "supported_io_types": { 00:16:21.819 "read": true, 00:16:21.819 "write": true, 00:16:21.819 "unmap": false, 00:16:21.819 "flush": false, 00:16:21.819 "reset": true, 00:16:21.819 "nvme_admin": false, 00:16:21.819 "nvme_io": false, 00:16:21.819 "nvme_io_md": false, 00:16:21.819 "write_zeroes": true, 00:16:21.819 "zcopy": false, 00:16:21.819 "get_zone_info": false, 00:16:21.819 "zone_management": false, 00:16:21.819 "zone_append": false, 00:16:21.819 "compare": false, 00:16:21.819 "compare_and_write": false, 00:16:21.819 "abort": false, 00:16:21.819 "seek_hole": false, 00:16:21.819 "seek_data": false, 00:16:21.819 "copy": false, 00:16:21.819 "nvme_iov_md": false 00:16:21.819 }, 00:16:21.819 "driver_specific": { 00:16:21.819 "raid": { 00:16:21.819 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:21.819 "strip_size_kb": 64, 00:16:21.819 "state": "online", 00:16:21.819 "raid_level": "raid5f", 00:16:21.819 "superblock": true, 00:16:21.819 "num_base_bdevs": 4, 00:16:21.819 "num_base_bdevs_discovered": 4, 00:16:21.819 "num_base_bdevs_operational": 4, 00:16:21.819 "base_bdevs_list": [ 00:16:21.819 { 00:16:21.819 "name": "pt1", 00:16:21.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.819 "is_configured": true, 00:16:21.819 "data_offset": 2048, 
00:16:21.819 "data_size": 63488 00:16:21.819 }, 00:16:21.819 { 00:16:21.819 "name": "pt2", 00:16:21.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.819 "is_configured": true, 00:16:21.819 "data_offset": 2048, 00:16:21.819 "data_size": 63488 00:16:21.819 }, 00:16:21.819 { 00:16:21.819 "name": "pt3", 00:16:21.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.819 "is_configured": true, 00:16:21.819 "data_offset": 2048, 00:16:21.819 "data_size": 63488 00:16:21.819 }, 00:16:21.819 { 00:16:21.819 "name": "pt4", 00:16:21.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.819 "is_configured": true, 00:16:21.819 "data_offset": 2048, 00:16:21.819 "data_size": 63488 00:16:21.819 } 00:16:21.819 ] 00:16:21.819 } 00:16:21.819 } 00:16:21.819 }' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:21.819 pt2 00:16:21.819 pt3 00:16:21.819 pt4' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.819 14:48:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.819 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 14:48:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 [2024-12-09 14:49:00.014968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=71ec3629-c1bd-4cf2-bad5-679081d3dd30 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
71ec3629-c1bd-4cf2-bad5-679081d3dd30 ']' 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 [2024-12-09 14:49:00.062721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.079 [2024-12-09 14:49:00.062809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.079 [2024-12-09 14:49:00.062921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.079 [2024-12-09 14:49:00.063024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.079 [2024-12-09 14:49:00.063073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.079 
14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.079 14:49:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.079 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.340 [2024-12-09 14:49:00.226459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:22.340 [2024-12-09 14:49:00.228420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:22.340 [2024-12-09 14:49:00.228513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:22.340 [2024-12-09 14:49:00.228587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:22.340 [2024-12-09 14:49:00.228667] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:22.340 [2024-12-09 14:49:00.228759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:22.340 [2024-12-09 14:49:00.228813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:22.340 [2024-12-09 14:49:00.228867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:22.340 [2024-12-09 14:49:00.228913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.340 [2024-12-09 14:49:00.228943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:22.340 request: 00:16:22.340 { 00:16:22.340 "name": "raid_bdev1", 00:16:22.340 "raid_level": "raid5f", 00:16:22.340 "base_bdevs": [ 00:16:22.340 "malloc1", 00:16:22.340 "malloc2", 00:16:22.340 "malloc3", 00:16:22.340 "malloc4" 00:16:22.340 ], 00:16:22.340 "strip_size_kb": 64, 00:16:22.340 "superblock": false, 00:16:22.340 "method": "bdev_raid_create", 00:16:22.340 "req_id": 1 00:16:22.340 } 00:16:22.340 Got JSON-RPC error response 
00:16:22.340 response: 00:16:22.340 { 00:16:22.340 "code": -17, 00:16:22.340 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:22.340 } 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.340 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.340 [2024-12-09 14:49:00.294294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:22.340 [2024-12-09 14:49:00.294377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:22.340 [2024-12-09 14:49:00.294396] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:22.340 [2024-12-09 14:49:00.294407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.341 [2024-12-09 14:49:00.296627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.341 [2024-12-09 14:49:00.296663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:22.341 [2024-12-09 14:49:00.296771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:22.341 [2024-12-09 14:49:00.296832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.341 pt1 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.341 "name": "raid_bdev1", 00:16:22.341 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:22.341 "strip_size_kb": 64, 00:16:22.341 "state": "configuring", 00:16:22.341 "raid_level": "raid5f", 00:16:22.341 "superblock": true, 00:16:22.341 "num_base_bdevs": 4, 00:16:22.341 "num_base_bdevs_discovered": 1, 00:16:22.341 "num_base_bdevs_operational": 4, 00:16:22.341 "base_bdevs_list": [ 00:16:22.341 { 00:16:22.341 "name": "pt1", 00:16:22.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.341 "is_configured": true, 00:16:22.341 "data_offset": 2048, 00:16:22.341 "data_size": 63488 00:16:22.341 }, 00:16:22.341 { 00:16:22.341 "name": null, 00:16:22.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.341 "is_configured": false, 00:16:22.341 "data_offset": 2048, 00:16:22.341 "data_size": 63488 00:16:22.341 }, 00:16:22.341 { 00:16:22.341 "name": null, 00:16:22.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.341 "is_configured": false, 00:16:22.341 "data_offset": 2048, 00:16:22.341 "data_size": 63488 00:16:22.341 }, 00:16:22.341 { 00:16:22.341 "name": null, 00:16:22.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.341 "is_configured": false, 00:16:22.341 "data_offset": 2048, 00:16:22.341 "data_size": 63488 00:16:22.341 } 00:16:22.341 ] 00:16:22.341 }' 
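The `verify_raid_bdev_state` calls in the trace pull the named raid bdev out of `bdev_raid_get_bdevs all` with jq and compare its state, level, strip size, and base-bdev counts against expectations. A simplified Python model of that check, fed the `raid_bdev_info` JSON captured just above (only pt1 recreated, so one base bdev discovered); the shell function also checks the discovered count, which the sketch asserts separately:

```python
import json

# raid_bdev_info as captured in the log above, trimmed to the checked fields.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Simplified model of the comparisons verify_raid_bdev_state performs."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return True

# Same invocation as in the trace:
#   verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4))
```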
00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.341 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.911 [2024-12-09 14:49:00.757529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.911 [2024-12-09 14:49:00.757694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.911 [2024-12-09 14:49:00.757736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:22.911 [2024-12-09 14:49:00.757766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.911 [2024-12-09 14:49:00.758227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.911 [2024-12-09 14:49:00.758295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.911 [2024-12-09 14:49:00.758417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.911 [2024-12-09 14:49:00.758475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.911 pt2 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.911 [2024-12-09 14:49:00.769492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:22.911 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.911 "name": "raid_bdev1", 00:16:22.911 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:22.911 "strip_size_kb": 64, 00:16:22.911 "state": "configuring", 00:16:22.911 "raid_level": "raid5f", 00:16:22.911 "superblock": true, 00:16:22.911 "num_base_bdevs": 4, 00:16:22.911 "num_base_bdevs_discovered": 1, 00:16:22.911 "num_base_bdevs_operational": 4, 00:16:22.911 "base_bdevs_list": [ 00:16:22.911 { 00:16:22.911 "name": "pt1", 00:16:22.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.911 "is_configured": true, 00:16:22.911 "data_offset": 2048, 00:16:22.911 "data_size": 63488 00:16:22.911 }, 00:16:22.911 { 00:16:22.911 "name": null, 00:16:22.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.911 "is_configured": false, 00:16:22.911 "data_offset": 0, 00:16:22.911 "data_size": 63488 00:16:22.911 }, 00:16:22.911 { 00:16:22.911 "name": null, 00:16:22.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.911 "is_configured": false, 00:16:22.911 "data_offset": 2048, 00:16:22.911 "data_size": 63488 00:16:22.911 }, 00:16:22.911 { 00:16:22.911 "name": null, 00:16:22.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.912 "is_configured": false, 00:16:22.912 "data_offset": 2048, 00:16:22.912 "data_size": 63488 00:16:22.912 } 00:16:22.912 ] 00:16:22.912 }' 00:16:22.912 14:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.912 14:49:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.172 [2024-12-09 14:49:01.148836] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.172 [2024-12-09 14:49:01.148903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.172 [2024-12-09 14:49:01.148923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:23.172 [2024-12-09 14:49:01.148932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.172 [2024-12-09 14:49:01.149360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.172 [2024-12-09 14:49:01.149377] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.172 [2024-12-09 14:49:01.149456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:23.172 [2024-12-09 14:49:01.149476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.172 pt2 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.172 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.172 [2024-12-09 14:49:01.160788] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:23.172 [2024-12-09 14:49:01.160839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.172 [2024-12-09 14:49:01.160862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:23.172 [2024-12-09 14:49:01.160872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.172 [2024-12-09 14:49:01.161223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.172 [2024-12-09 14:49:01.161238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:23.172 [2024-12-09 14:49:01.161297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:23.172 [2024-12-09 14:49:01.161320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:23.173 pt3 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.173 [2024-12-09 14:49:01.172760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:23.173 [2024-12-09 14:49:01.172799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.173 [2024-12-09 14:49:01.172831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:23.173 [2024-12-09 14:49:01.172838] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.173 [2024-12-09 14:49:01.173185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.173 [2024-12-09 14:49:01.173200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:23.173 [2024-12-09 14:49:01.173255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:23.173 [2024-12-09 14:49:01.173272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:23.173 [2024-12-09 14:49:01.173398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:23.173 [2024-12-09 14:49:01.173405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:23.173 [2024-12-09 14:49:01.173660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:23.173 [2024-12-09 14:49:01.180648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:23.173 [2024-12-09 14:49:01.180671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:23.173 [2024-12-09 14:49:01.180848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.173 pt4 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.173 "name": "raid_bdev1", 00:16:23.173 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:23.173 "strip_size_kb": 64, 00:16:23.173 "state": "online", 00:16:23.173 "raid_level": "raid5f", 00:16:23.173 "superblock": true, 00:16:23.173 "num_base_bdevs": 4, 00:16:23.173 "num_base_bdevs_discovered": 4, 00:16:23.173 "num_base_bdevs_operational": 4, 00:16:23.173 "base_bdevs_list": [ 00:16:23.173 { 00:16:23.173 "name": "pt1", 00:16:23.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.173 "is_configured": true, 00:16:23.173 
"data_offset": 2048, 00:16:23.173 "data_size": 63488 00:16:23.173 }, 00:16:23.173 { 00:16:23.173 "name": "pt2", 00:16:23.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.173 "is_configured": true, 00:16:23.173 "data_offset": 2048, 00:16:23.173 "data_size": 63488 00:16:23.173 }, 00:16:23.173 { 00:16:23.173 "name": "pt3", 00:16:23.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.173 "is_configured": true, 00:16:23.173 "data_offset": 2048, 00:16:23.173 "data_size": 63488 00:16:23.173 }, 00:16:23.173 { 00:16:23.173 "name": "pt4", 00:16:23.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.173 "is_configured": true, 00:16:23.173 "data_offset": 2048, 00:16:23.173 "data_size": 63488 00:16:23.173 } 00:16:23.173 ] 00:16:23.173 }' 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.173 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.743 14:49:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.743 [2024-12-09 14:49:01.689525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.743 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:23.743 "name": "raid_bdev1", 00:16:23.743 "aliases": [ 00:16:23.743 "71ec3629-c1bd-4cf2-bad5-679081d3dd30" 00:16:23.743 ], 00:16:23.743 "product_name": "Raid Volume", 00:16:23.743 "block_size": 512, 00:16:23.743 "num_blocks": 190464, 00:16:23.743 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:23.743 "assigned_rate_limits": { 00:16:23.743 "rw_ios_per_sec": 0, 00:16:23.743 "rw_mbytes_per_sec": 0, 00:16:23.743 "r_mbytes_per_sec": 0, 00:16:23.743 "w_mbytes_per_sec": 0 00:16:23.743 }, 00:16:23.743 "claimed": false, 00:16:23.743 "zoned": false, 00:16:23.743 "supported_io_types": { 00:16:23.743 "read": true, 00:16:23.743 "write": true, 00:16:23.743 "unmap": false, 00:16:23.744 "flush": false, 00:16:23.744 "reset": true, 00:16:23.744 "nvme_admin": false, 00:16:23.744 "nvme_io": false, 00:16:23.744 "nvme_io_md": false, 00:16:23.744 "write_zeroes": true, 00:16:23.744 "zcopy": false, 00:16:23.744 "get_zone_info": false, 00:16:23.744 "zone_management": false, 00:16:23.744 "zone_append": false, 00:16:23.744 "compare": false, 00:16:23.744 "compare_and_write": false, 00:16:23.744 "abort": false, 00:16:23.744 "seek_hole": false, 00:16:23.744 "seek_data": false, 00:16:23.744 "copy": false, 00:16:23.744 "nvme_iov_md": false 00:16:23.744 }, 00:16:23.744 "driver_specific": { 00:16:23.744 "raid": { 00:16:23.744 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:23.744 "strip_size_kb": 64, 00:16:23.744 "state": "online", 00:16:23.744 "raid_level": "raid5f", 00:16:23.744 "superblock": true, 00:16:23.744 "num_base_bdevs": 4, 00:16:23.744 "num_base_bdevs_discovered": 4, 
00:16:23.744 "num_base_bdevs_operational": 4, 00:16:23.744 "base_bdevs_list": [ 00:16:23.744 { 00:16:23.744 "name": "pt1", 00:16:23.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.744 "is_configured": true, 00:16:23.744 "data_offset": 2048, 00:16:23.744 "data_size": 63488 00:16:23.744 }, 00:16:23.744 { 00:16:23.744 "name": "pt2", 00:16:23.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.744 "is_configured": true, 00:16:23.744 "data_offset": 2048, 00:16:23.744 "data_size": 63488 00:16:23.744 }, 00:16:23.744 { 00:16:23.744 "name": "pt3", 00:16:23.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.744 "is_configured": true, 00:16:23.744 "data_offset": 2048, 00:16:23.744 "data_size": 63488 00:16:23.744 }, 00:16:23.744 { 00:16:23.744 "name": "pt4", 00:16:23.744 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.744 "is_configured": true, 00:16:23.744 "data_offset": 2048, 00:16:23.744 "data_size": 63488 00:16:23.744 } 00:16:23.744 ] 00:16:23.744 } 00:16:23.744 } 00:16:23.744 }' 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:23.744 pt2 00:16:23.744 pt3 00:16:23.744 pt4' 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.744 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.004 14:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.004 [2024-12-09 14:49:02.032899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.004 14:49:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 71ec3629-c1bd-4cf2-bad5-679081d3dd30 '!=' 71ec3629-c1bd-4cf2-bad5-679081d3dd30 ']' 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.004 [2024-12-09 14:49:02.076720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.004 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.264 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.264 "name": "raid_bdev1", 00:16:24.264 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:24.264 "strip_size_kb": 64, 00:16:24.264 "state": "online", 00:16:24.264 "raid_level": "raid5f", 00:16:24.264 "superblock": true, 00:16:24.264 "num_base_bdevs": 4, 00:16:24.264 "num_base_bdevs_discovered": 3, 00:16:24.264 "num_base_bdevs_operational": 3, 00:16:24.264 "base_bdevs_list": [ 00:16:24.264 { 00:16:24.264 "name": null, 00:16:24.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.264 "is_configured": false, 00:16:24.264 "data_offset": 0, 00:16:24.264 "data_size": 63488 00:16:24.264 }, 00:16:24.264 { 00:16:24.264 "name": "pt2", 00:16:24.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.264 "is_configured": true, 00:16:24.264 "data_offset": 2048, 00:16:24.264 "data_size": 63488 00:16:24.264 }, 00:16:24.264 { 00:16:24.264 "name": "pt3", 00:16:24.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.264 "is_configured": true, 00:16:24.264 "data_offset": 2048, 00:16:24.264 "data_size": 63488 00:16:24.264 }, 00:16:24.264 { 00:16:24.264 "name": "pt4", 00:16:24.264 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.264 "is_configured": true, 00:16:24.264 
"data_offset": 2048, 00:16:24.264 "data_size": 63488 00:16:24.264 } 00:16:24.264 ] 00:16:24.264 }' 00:16:24.264 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.264 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 [2024-12-09 14:49:02.523894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.525 [2024-12-09 14:49:02.523987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.525 [2024-12-09 14:49:02.524110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.525 [2024-12-09 14:49:02.524210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.525 [2024-12-09 14:49:02.524258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.525 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 [2024-12-09 14:49:02.619705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:24.526 [2024-12-09 14:49:02.619800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.526 [2024-12-09 14:49:02.619823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:24.526 [2024-12-09 14:49:02.619832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.526 [2024-12-09 14:49:02.622126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.526 [2024-12-09 14:49:02.622164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:24.526 [2024-12-09 14:49:02.622246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:24.526 [2024-12-09 14:49:02.622292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.526 pt2 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.526 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.786 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.786 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.786 "name": "raid_bdev1", 00:16:24.786 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:24.786 "strip_size_kb": 64, 00:16:24.786 "state": "configuring", 00:16:24.786 "raid_level": "raid5f", 00:16:24.786 "superblock": true, 00:16:24.786 
"num_base_bdevs": 4, 00:16:24.786 "num_base_bdevs_discovered": 1, 00:16:24.786 "num_base_bdevs_operational": 3, 00:16:24.786 "base_bdevs_list": [ 00:16:24.786 { 00:16:24.786 "name": null, 00:16:24.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.786 "is_configured": false, 00:16:24.786 "data_offset": 2048, 00:16:24.786 "data_size": 63488 00:16:24.786 }, 00:16:24.786 { 00:16:24.786 "name": "pt2", 00:16:24.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.786 "is_configured": true, 00:16:24.786 "data_offset": 2048, 00:16:24.786 "data_size": 63488 00:16:24.786 }, 00:16:24.786 { 00:16:24.786 "name": null, 00:16:24.786 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.786 "is_configured": false, 00:16:24.786 "data_offset": 2048, 00:16:24.786 "data_size": 63488 00:16:24.786 }, 00:16:24.786 { 00:16:24.786 "name": null, 00:16:24.786 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.786 "is_configured": false, 00:16:24.786 "data_offset": 2048, 00:16:24.786 "data_size": 63488 00:16:24.786 } 00:16:24.786 ] 00:16:24.786 }' 00:16:24.786 14:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.786 14:49:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 [2024-12-09 14:49:03.055038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:25.046 [2024-12-09 
14:49:03.055178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.046 [2024-12-09 14:49:03.055228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:25.046 [2024-12-09 14:49:03.055281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.046 [2024-12-09 14:49:03.055765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.046 [2024-12-09 14:49:03.055828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:25.046 [2024-12-09 14:49:03.055949] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:25.046 [2024-12-09 14:49:03.056000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:25.046 pt3 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.046 "name": "raid_bdev1", 00:16:25.046 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:25.046 "strip_size_kb": 64, 00:16:25.046 "state": "configuring", 00:16:25.046 "raid_level": "raid5f", 00:16:25.046 "superblock": true, 00:16:25.046 "num_base_bdevs": 4, 00:16:25.046 "num_base_bdevs_discovered": 2, 00:16:25.046 "num_base_bdevs_operational": 3, 00:16:25.046 "base_bdevs_list": [ 00:16:25.046 { 00:16:25.046 "name": null, 00:16:25.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.046 "is_configured": false, 00:16:25.046 "data_offset": 2048, 00:16:25.046 "data_size": 63488 00:16:25.046 }, 00:16:25.046 { 00:16:25.046 "name": "pt2", 00:16:25.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.046 "is_configured": true, 00:16:25.046 "data_offset": 2048, 00:16:25.046 "data_size": 63488 00:16:25.046 }, 00:16:25.046 { 00:16:25.046 "name": "pt3", 00:16:25.046 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.046 "is_configured": true, 00:16:25.046 "data_offset": 2048, 00:16:25.046 "data_size": 63488 00:16:25.046 }, 00:16:25.046 { 00:16:25.046 "name": null, 00:16:25.046 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.046 "is_configured": false, 00:16:25.046 "data_offset": 2048, 
00:16:25.046 "data_size": 63488 00:16:25.046 } 00:16:25.046 ] 00:16:25.046 }' 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.046 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.616 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:25.616 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:25.616 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:25.616 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:25.616 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.616 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.616 [2024-12-09 14:49:03.542220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:25.616 [2024-12-09 14:49:03.542286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.616 [2024-12-09 14:49:03.542310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:25.616 [2024-12-09 14:49:03.542319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.616 [2024-12-09 14:49:03.542766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.616 [2024-12-09 14:49:03.542784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:25.616 [2024-12-09 14:49:03.542863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:25.616 [2024-12-09 14:49:03.542889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:25.616 [2024-12-09 14:49:03.543016] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:25.616 [2024-12-09 14:49:03.543025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.617 [2024-12-09 14:49:03.543287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:25.617 [2024-12-09 14:49:03.550314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:25.617 [2024-12-09 14:49:03.550338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:25.617 [2024-12-09 14:49:03.550654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.617 pt4 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.617 
14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.617 "name": "raid_bdev1", 00:16:25.617 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:25.617 "strip_size_kb": 64, 00:16:25.617 "state": "online", 00:16:25.617 "raid_level": "raid5f", 00:16:25.617 "superblock": true, 00:16:25.617 "num_base_bdevs": 4, 00:16:25.617 "num_base_bdevs_discovered": 3, 00:16:25.617 "num_base_bdevs_operational": 3, 00:16:25.617 "base_bdevs_list": [ 00:16:25.617 { 00:16:25.617 "name": null, 00:16:25.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.617 "is_configured": false, 00:16:25.617 "data_offset": 2048, 00:16:25.617 "data_size": 63488 00:16:25.617 }, 00:16:25.617 { 00:16:25.617 "name": "pt2", 00:16:25.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.617 "is_configured": true, 00:16:25.617 "data_offset": 2048, 00:16:25.617 "data_size": 63488 00:16:25.617 }, 00:16:25.617 { 00:16:25.617 "name": "pt3", 00:16:25.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.617 "is_configured": true, 00:16:25.617 "data_offset": 2048, 00:16:25.617 "data_size": 63488 00:16:25.617 }, 00:16:25.617 { 00:16:25.617 "name": "pt4", 00:16:25.617 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.617 "is_configured": true, 00:16:25.617 "data_offset": 2048, 00:16:25.617 "data_size": 63488 00:16:25.617 } 00:16:25.617 ] 00:16:25.617 }' 00:16:25.617 14:49:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.617 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.877 14:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:25.877 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.877 14:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.877 [2024-12-09 14:49:03.998765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.877 [2024-12-09 14:49:03.998862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.877 [2024-12-09 14:49:03.998969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.137 [2024-12-09 14:49:03.999073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.137 [2024-12-09 14:49:03.999129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.137 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 [2024-12-09 14:49:04.074636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.137 [2024-12-09 14:49:04.074752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.137 [2024-12-09 14:49:04.074799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:26.137 [2024-12-09 14:49:04.074861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.137 [2024-12-09 14:49:04.077326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.137 [2024-12-09 14:49:04.077408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.137 [2024-12-09 14:49:04.077523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:26.137 [2024-12-09 14:49:04.077640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.137 
[2024-12-09 14:49:04.077816] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:26.137 [2024-12-09 14:49:04.077879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.137 [2024-12-09 14:49:04.077930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:26.137 [2024-12-09 14:49:04.078043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.138 [2024-12-09 14:49:04.078203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:26.138 pt1 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.138 "name": "raid_bdev1", 00:16:26.138 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:26.138 "strip_size_kb": 64, 00:16:26.138 "state": "configuring", 00:16:26.138 "raid_level": "raid5f", 00:16:26.138 "superblock": true, 00:16:26.138 "num_base_bdevs": 4, 00:16:26.138 "num_base_bdevs_discovered": 2, 00:16:26.138 "num_base_bdevs_operational": 3, 00:16:26.138 "base_bdevs_list": [ 00:16:26.138 { 00:16:26.138 "name": null, 00:16:26.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.138 "is_configured": false, 00:16:26.138 "data_offset": 2048, 00:16:26.138 "data_size": 63488 00:16:26.138 }, 00:16:26.138 { 00:16:26.138 "name": "pt2", 00:16:26.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.138 "is_configured": true, 00:16:26.138 "data_offset": 2048, 00:16:26.138 "data_size": 63488 00:16:26.138 }, 00:16:26.138 { 00:16:26.138 "name": "pt3", 00:16:26.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.138 "is_configured": true, 00:16:26.138 "data_offset": 2048, 00:16:26.138 "data_size": 63488 00:16:26.138 }, 00:16:26.138 { 00:16:26.138 "name": null, 00:16:26.138 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.138 "is_configured": false, 00:16:26.138 "data_offset": 2048, 00:16:26.138 "data_size": 63488 00:16:26.138 } 00:16:26.138 ] 
00:16:26.138 }' 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.138 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.708 [2024-12-09 14:49:04.561818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:26.708 [2024-12-09 14:49:04.561940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.708 [2024-12-09 14:49:04.561997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:26.708 [2024-12-09 14:49:04.562012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.708 [2024-12-09 14:49:04.562539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.708 [2024-12-09 14:49:04.562585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:26.708 [2024-12-09 14:49:04.562685] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:26.708 [2024-12-09 14:49:04.562711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:26.708 [2024-12-09 14:49:04.562873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:26.708 [2024-12-09 14:49:04.562890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:26.708 [2024-12-09 14:49:04.563180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:26.708 [2024-12-09 14:49:04.571148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:26.708 [2024-12-09 14:49:04.571185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:26.708 [2024-12-09 14:49:04.571548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.708 pt4 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.708 14:49:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.708 "name": "raid_bdev1", 00:16:26.708 "uuid": "71ec3629-c1bd-4cf2-bad5-679081d3dd30", 00:16:26.708 "strip_size_kb": 64, 00:16:26.708 "state": "online", 00:16:26.708 "raid_level": "raid5f", 00:16:26.708 "superblock": true, 00:16:26.708 "num_base_bdevs": 4, 00:16:26.708 "num_base_bdevs_discovered": 3, 00:16:26.708 "num_base_bdevs_operational": 3, 00:16:26.708 "base_bdevs_list": [ 00:16:26.708 { 00:16:26.708 "name": null, 00:16:26.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.708 "is_configured": false, 00:16:26.708 "data_offset": 2048, 00:16:26.708 "data_size": 63488 00:16:26.708 }, 00:16:26.708 { 00:16:26.708 "name": "pt2", 00:16:26.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.708 "is_configured": true, 00:16:26.708 "data_offset": 2048, 00:16:26.708 "data_size": 63488 00:16:26.708 }, 00:16:26.708 { 00:16:26.708 "name": "pt3", 00:16:26.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.708 "is_configured": true, 00:16:26.708 "data_offset": 2048, 00:16:26.708 "data_size": 63488 
00:16:26.708 }, 00:16:26.708 { 00:16:26.708 "name": "pt4", 00:16:26.708 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.708 "is_configured": true, 00:16:26.708 "data_offset": 2048, 00:16:26.708 "data_size": 63488 00:16:26.708 } 00:16:26.708 ] 00:16:26.708 }' 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.708 14:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.968 14:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:26.968 14:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:26.968 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.968 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.968 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.228 [2024-12-09 14:49:05.116116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 71ec3629-c1bd-4cf2-bad5-679081d3dd30 '!=' 71ec3629-c1bd-4cf2-bad5-679081d3dd30 ']' 00:16:27.228 14:49:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85426 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85426 ']' 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85426 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85426 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.228 killing process with pid 85426 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85426' 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 85426 00:16:27.228 [2024-12-09 14:49:05.197295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.228 [2024-12-09 14:49:05.197386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.228 [2024-12-09 14:49:05.197468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.228 14:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 85426 00:16:27.228 [2024-12-09 14:49:05.197484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:27.488 [2024-12-09 14:49:05.582026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.869 14:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:28.869 
00:16:28.869 real 0m8.649s 00:16:28.869 user 0m13.621s 00:16:28.869 sys 0m1.625s 00:16:28.869 ************************************ 00:16:28.869 END TEST raid5f_superblock_test 00:16:28.869 ************************************ 00:16:28.869 14:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.869 14:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.869 14:49:06 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:28.869 14:49:06 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:28.869 14:49:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:28.869 14:49:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.869 14:49:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.869 ************************************ 00:16:28.869 START TEST raid5f_rebuild_test 00:16:28.869 ************************************ 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:28.869 14:49:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:28.869 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85910 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85910 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85910 ']' 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.870 14:49:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.870 [2024-12-09 14:49:06.863294] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:16:28.870 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:28.870 Zero copy mechanism will not be used. 
00:16:28.870 [2024-12-09 14:49:06.863477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85910 ] 00:16:29.130 [2024-12-09 14:49:07.039429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.130 [2024-12-09 14:49:07.155282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.390 [2024-12-09 14:49:07.355109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.390 [2024-12-09 14:49:07.355179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.650 BaseBdev1_malloc 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.650 [2024-12-09 14:49:07.746743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:29.650 [2024-12-09 14:49:07.746861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.650 [2024-12-09 14:49:07.746903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:29.650 [2024-12-09 14:49:07.746936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.650 [2024-12-09 14:49:07.749266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.650 [2024-12-09 14:49:07.749343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:29.650 BaseBdev1 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.650 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.910 BaseBdev2_malloc 00:16:29.910 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 [2024-12-09 14:49:07.802416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:29.911 [2024-12-09 14:49:07.802547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.911 [2024-12-09 14:49:07.802597] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:29.911 [2024-12-09 14:49:07.802631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.911 [2024-12-09 14:49:07.804747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.911 [2024-12-09 14:49:07.804820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:29.911 BaseBdev2 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 BaseBdev3_malloc 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 [2024-12-09 14:49:07.888373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:29.911 [2024-12-09 14:49:07.888429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.911 [2024-12-09 14:49:07.888451] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:29.911 [2024-12-09 14:49:07.888462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.911 
[2024-12-09 14:49:07.890728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.911 [2024-12-09 14:49:07.890768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:29.911 BaseBdev3 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 BaseBdev4_malloc 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 [2024-12-09 14:49:07.943260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:29.911 [2024-12-09 14:49:07.943331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.911 [2024-12-09 14:49:07.943357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:29.911 [2024-12-09 14:49:07.943368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.911 [2024-12-09 14:49:07.945467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.911 [2024-12-09 14:49:07.945560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:29.911 BaseBdev4 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 spare_malloc 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 spare_delay 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 [2024-12-09 14:49:08.012039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:29.911 [2024-12-09 14:49:08.012150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.911 [2024-12-09 14:49:08.012186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:29.911 [2024-12-09 14:49:08.012217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.911 [2024-12-09 14:49:08.014298] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.911 [2024-12-09 14:49:08.014388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:29.911 spare 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.911 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.911 [2024-12-09 14:49:08.024069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.911 [2024-12-09 14:49:08.025884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.911 [2024-12-09 14:49:08.025982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.911 [2024-12-09 14:49:08.026070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:29.911 [2024-12-09 14:49:08.026190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:29.911 [2024-12-09 14:49:08.026231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:29.911 [2024-12-09 14:49:08.026494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:30.243 [2024-12-09 14:49:08.034383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:30.243 [2024-12-09 14:49:08.034450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:30.243 [2024-12-09 14:49:08.034724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.243 14:49:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.243 "name": "raid_bdev1", 00:16:30.243 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:30.243 "strip_size_kb": 64, 00:16:30.243 "state": "online", 00:16:30.243 
"raid_level": "raid5f", 00:16:30.243 "superblock": false, 00:16:30.243 "num_base_bdevs": 4, 00:16:30.243 "num_base_bdevs_discovered": 4, 00:16:30.243 "num_base_bdevs_operational": 4, 00:16:30.243 "base_bdevs_list": [ 00:16:30.243 { 00:16:30.243 "name": "BaseBdev1", 00:16:30.243 "uuid": "5461a4db-d606-50be-8152-f7b16e6a9f46", 00:16:30.243 "is_configured": true, 00:16:30.243 "data_offset": 0, 00:16:30.243 "data_size": 65536 00:16:30.243 }, 00:16:30.243 { 00:16:30.243 "name": "BaseBdev2", 00:16:30.243 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:30.243 "is_configured": true, 00:16:30.243 "data_offset": 0, 00:16:30.243 "data_size": 65536 00:16:30.243 }, 00:16:30.243 { 00:16:30.243 "name": "BaseBdev3", 00:16:30.243 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:30.243 "is_configured": true, 00:16:30.243 "data_offset": 0, 00:16:30.243 "data_size": 65536 00:16:30.243 }, 00:16:30.243 { 00:16:30.243 "name": "BaseBdev4", 00:16:30.243 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:30.243 "is_configured": true, 00:16:30.243 "data_offset": 0, 00:16:30.243 "data_size": 65536 00:16:30.243 } 00:16:30.243 ] 00:16:30.243 }' 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.243 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.502 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.503 [2024-12-09 14:49:08.471085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:30.503 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:30.762 [2024-12-09 14:49:08.718486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:30.762 /dev/nbd0 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.762 1+0 records in 00:16:30.762 1+0 records out 00:16:30.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622861 s, 6.6 MB/s 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:30.762 14:49:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:31.331 512+0 records in 00:16:31.331 512+0 records out 00:16:31.331 100663296 bytes (101 MB, 96 MiB) copied, 0.483508 s, 208 MB/s 00:16:31.331 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:31.331 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.331 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:31.331 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:31.331 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:31.331 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.331 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.592 
[2024-12-09 14:49:09.512792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.592 [2024-12-09 14:49:09.531318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.592 "name": "raid_bdev1", 00:16:31.592 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:31.592 "strip_size_kb": 64, 00:16:31.592 "state": "online", 00:16:31.592 "raid_level": "raid5f", 00:16:31.592 "superblock": false, 00:16:31.592 "num_base_bdevs": 4, 00:16:31.592 "num_base_bdevs_discovered": 3, 00:16:31.592 "num_base_bdevs_operational": 3, 00:16:31.592 "base_bdevs_list": [ 00:16:31.592 { 00:16:31.592 "name": null, 00:16:31.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.592 "is_configured": false, 00:16:31.592 "data_offset": 0, 00:16:31.592 "data_size": 65536 00:16:31.592 }, 00:16:31.592 { 00:16:31.592 "name": "BaseBdev2", 00:16:31.592 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:31.592 "is_configured": true, 00:16:31.592 "data_offset": 0, 00:16:31.592 "data_size": 65536 00:16:31.592 }, 00:16:31.592 { 00:16:31.592 "name": "BaseBdev3", 00:16:31.592 "uuid": 
"23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:31.592 "is_configured": true, 00:16:31.592 "data_offset": 0, 00:16:31.592 "data_size": 65536 00:16:31.592 }, 00:16:31.592 { 00:16:31.592 "name": "BaseBdev4", 00:16:31.592 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:31.592 "is_configured": true, 00:16:31.592 "data_offset": 0, 00:16:31.592 "data_size": 65536 00:16:31.592 } 00:16:31.592 ] 00:16:31.592 }' 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.592 14:49:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.162 14:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.162 14:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.162 14:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.162 [2024-12-09 14:49:10.018494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.162 [2024-12-09 14:49:10.035439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:32.162 14:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.162 14:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:32.162 [2024-12-09 14:49:10.045271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.102 14:49:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.102 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.102 "name": "raid_bdev1", 00:16:33.102 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:33.102 "strip_size_kb": 64, 00:16:33.102 "state": "online", 00:16:33.102 "raid_level": "raid5f", 00:16:33.102 "superblock": false, 00:16:33.102 "num_base_bdevs": 4, 00:16:33.102 "num_base_bdevs_discovered": 4, 00:16:33.102 "num_base_bdevs_operational": 4, 00:16:33.102 "process": { 00:16:33.102 "type": "rebuild", 00:16:33.102 "target": "spare", 00:16:33.102 "progress": { 00:16:33.102 "blocks": 19200, 00:16:33.102 "percent": 9 00:16:33.102 } 00:16:33.102 }, 00:16:33.102 "base_bdevs_list": [ 00:16:33.102 { 00:16:33.102 "name": "spare", 00:16:33.102 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:33.103 "is_configured": true, 00:16:33.103 "data_offset": 0, 00:16:33.103 "data_size": 65536 00:16:33.103 }, 00:16:33.103 { 00:16:33.103 "name": "BaseBdev2", 00:16:33.103 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:33.103 "is_configured": true, 00:16:33.103 "data_offset": 0, 00:16:33.103 "data_size": 65536 00:16:33.103 }, 00:16:33.103 { 00:16:33.103 "name": "BaseBdev3", 00:16:33.103 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:33.103 "is_configured": true, 00:16:33.103 "data_offset": 0, 00:16:33.103 "data_size": 65536 00:16:33.103 }, 
00:16:33.103 { 00:16:33.103 "name": "BaseBdev4", 00:16:33.103 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:33.103 "is_configured": true, 00:16:33.103 "data_offset": 0, 00:16:33.103 "data_size": 65536 00:16:33.103 } 00:16:33.103 ] 00:16:33.103 }' 00:16:33.103 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.103 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.103 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.103 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.103 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:33.103 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.103 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.103 [2024-12-09 14:49:11.176710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.362 [2024-12-09 14:49:11.253591] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.362 [2024-12-09 14:49:11.253766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.362 [2024-12-09 14:49:11.253808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.362 [2024-12-09 14:49:11.253834] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.362 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.363 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.363 "name": "raid_bdev1", 00:16:33.363 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:33.363 "strip_size_kb": 64, 00:16:33.363 "state": "online", 00:16:33.363 "raid_level": "raid5f", 00:16:33.363 "superblock": false, 00:16:33.363 "num_base_bdevs": 4, 00:16:33.363 "num_base_bdevs_discovered": 3, 00:16:33.363 "num_base_bdevs_operational": 3, 00:16:33.363 "base_bdevs_list": [ 00:16:33.363 { 00:16:33.363 "name": null, 00:16:33.363 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:33.363 "is_configured": false, 00:16:33.363 "data_offset": 0, 00:16:33.363 "data_size": 65536 00:16:33.363 }, 00:16:33.363 { 00:16:33.363 "name": "BaseBdev2", 00:16:33.363 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:33.363 "is_configured": true, 00:16:33.363 "data_offset": 0, 00:16:33.363 "data_size": 65536 00:16:33.363 }, 00:16:33.363 { 00:16:33.363 "name": "BaseBdev3", 00:16:33.363 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:33.363 "is_configured": true, 00:16:33.363 "data_offset": 0, 00:16:33.363 "data_size": 65536 00:16:33.363 }, 00:16:33.363 { 00:16:33.363 "name": "BaseBdev4", 00:16:33.363 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:33.363 "is_configured": true, 00:16:33.363 "data_offset": 0, 00:16:33.363 "data_size": 65536 00:16:33.363 } 00:16:33.363 ] 00:16:33.363 }' 00:16:33.363 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.363 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.622 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.622 14:49:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.881 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.881 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.881 "name": "raid_bdev1", 00:16:33.881 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:33.881 "strip_size_kb": 64, 00:16:33.881 "state": "online", 00:16:33.881 "raid_level": "raid5f", 00:16:33.881 "superblock": false, 00:16:33.881 "num_base_bdevs": 4, 00:16:33.881 "num_base_bdevs_discovered": 3, 00:16:33.881 "num_base_bdevs_operational": 3, 00:16:33.881 "base_bdevs_list": [ 00:16:33.881 { 00:16:33.881 "name": null, 00:16:33.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.881 "is_configured": false, 00:16:33.881 "data_offset": 0, 00:16:33.881 "data_size": 65536 00:16:33.881 }, 00:16:33.881 { 00:16:33.881 "name": "BaseBdev2", 00:16:33.882 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:33.882 "is_configured": true, 00:16:33.882 "data_offset": 0, 00:16:33.882 "data_size": 65536 00:16:33.882 }, 00:16:33.882 { 00:16:33.882 "name": "BaseBdev3", 00:16:33.882 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:33.882 "is_configured": true, 00:16:33.882 "data_offset": 0, 00:16:33.882 "data_size": 65536 00:16:33.882 }, 00:16:33.882 { 00:16:33.882 "name": "BaseBdev4", 00:16:33.882 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:33.882 "is_configured": true, 00:16:33.882 "data_offset": 0, 00:16:33.882 "data_size": 65536 00:16:33.882 } 00:16:33.882 ] 00:16:33.882 }' 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.882 [2024-12-09 14:49:11.889885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.882 [2024-12-09 14:49:11.904888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.882 14:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:33.882 [2024-12-09 14:49:11.914456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.821 14:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.821 14:49:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.080 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.080 "name": "raid_bdev1", 00:16:35.080 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:35.080 "strip_size_kb": 64, 00:16:35.080 "state": "online", 00:16:35.080 "raid_level": "raid5f", 00:16:35.080 "superblock": false, 00:16:35.080 "num_base_bdevs": 4, 00:16:35.080 "num_base_bdevs_discovered": 4, 00:16:35.080 "num_base_bdevs_operational": 4, 00:16:35.080 "process": { 00:16:35.080 "type": "rebuild", 00:16:35.080 "target": "spare", 00:16:35.080 "progress": { 00:16:35.080 "blocks": 19200, 00:16:35.080 "percent": 9 00:16:35.080 } 00:16:35.080 }, 00:16:35.080 "base_bdevs_list": [ 00:16:35.080 { 00:16:35.080 "name": "spare", 00:16:35.080 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:35.080 "is_configured": true, 00:16:35.080 "data_offset": 0, 00:16:35.080 "data_size": 65536 00:16:35.080 }, 00:16:35.080 { 00:16:35.080 "name": "BaseBdev2", 00:16:35.080 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:35.080 "is_configured": true, 00:16:35.080 "data_offset": 0, 00:16:35.080 "data_size": 65536 00:16:35.080 }, 00:16:35.080 { 00:16:35.080 "name": "BaseBdev3", 00:16:35.080 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:35.080 "is_configured": true, 00:16:35.080 "data_offset": 0, 00:16:35.080 "data_size": 65536 00:16:35.080 }, 00:16:35.080 { 00:16:35.080 "name": "BaseBdev4", 00:16:35.080 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:35.080 "is_configured": true, 00:16:35.080 "data_offset": 0, 00:16:35.080 "data_size": 65536 00:16:35.080 } 00:16:35.080 ] 00:16:35.080 }' 00:16:35.080 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.081 14:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.081 14:49:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=625 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.081 "name": "raid_bdev1", 00:16:35.081 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 
00:16:35.081 "strip_size_kb": 64, 00:16:35.081 "state": "online", 00:16:35.081 "raid_level": "raid5f", 00:16:35.081 "superblock": false, 00:16:35.081 "num_base_bdevs": 4, 00:16:35.081 "num_base_bdevs_discovered": 4, 00:16:35.081 "num_base_bdevs_operational": 4, 00:16:35.081 "process": { 00:16:35.081 "type": "rebuild", 00:16:35.081 "target": "spare", 00:16:35.081 "progress": { 00:16:35.081 "blocks": 21120, 00:16:35.081 "percent": 10 00:16:35.081 } 00:16:35.081 }, 00:16:35.081 "base_bdevs_list": [ 00:16:35.081 { 00:16:35.081 "name": "spare", 00:16:35.081 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:35.081 "is_configured": true, 00:16:35.081 "data_offset": 0, 00:16:35.081 "data_size": 65536 00:16:35.081 }, 00:16:35.081 { 00:16:35.081 "name": "BaseBdev2", 00:16:35.081 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:35.081 "is_configured": true, 00:16:35.081 "data_offset": 0, 00:16:35.081 "data_size": 65536 00:16:35.081 }, 00:16:35.081 { 00:16:35.081 "name": "BaseBdev3", 00:16:35.081 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:35.081 "is_configured": true, 00:16:35.081 "data_offset": 0, 00:16:35.081 "data_size": 65536 00:16:35.081 }, 00:16:35.081 { 00:16:35.081 "name": "BaseBdev4", 00:16:35.081 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:35.081 "is_configured": true, 00:16:35.081 "data_offset": 0, 00:16:35.081 "data_size": 65536 00:16:35.081 } 00:16:35.081 ] 00:16:35.081 }' 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.081 14:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.461 14:49:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.461 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.461 "name": "raid_bdev1", 00:16:36.461 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:36.461 "strip_size_kb": 64, 00:16:36.461 "state": "online", 00:16:36.461 "raid_level": "raid5f", 00:16:36.461 "superblock": false, 00:16:36.461 "num_base_bdevs": 4, 00:16:36.461 "num_base_bdevs_discovered": 4, 00:16:36.461 "num_base_bdevs_operational": 4, 00:16:36.461 "process": { 00:16:36.461 "type": "rebuild", 00:16:36.461 "target": "spare", 00:16:36.461 "progress": { 00:16:36.461 "blocks": 42240, 00:16:36.461 "percent": 21 00:16:36.461 } 00:16:36.461 }, 00:16:36.461 "base_bdevs_list": [ 00:16:36.461 { 00:16:36.461 "name": "spare", 00:16:36.461 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 
00:16:36.461 "is_configured": true, 00:16:36.461 "data_offset": 0, 00:16:36.461 "data_size": 65536 00:16:36.461 }, 00:16:36.461 { 00:16:36.461 "name": "BaseBdev2", 00:16:36.461 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:36.461 "is_configured": true, 00:16:36.461 "data_offset": 0, 00:16:36.462 "data_size": 65536 00:16:36.462 }, 00:16:36.462 { 00:16:36.462 "name": "BaseBdev3", 00:16:36.462 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:36.462 "is_configured": true, 00:16:36.462 "data_offset": 0, 00:16:36.462 "data_size": 65536 00:16:36.462 }, 00:16:36.462 { 00:16:36.462 "name": "BaseBdev4", 00:16:36.462 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:36.462 "is_configured": true, 00:16:36.462 "data_offset": 0, 00:16:36.462 "data_size": 65536 00:16:36.462 } 00:16:36.462 ] 00:16:36.462 }' 00:16:36.462 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.462 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.462 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.462 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.462 14:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.402 "name": "raid_bdev1", 00:16:37.402 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:37.402 "strip_size_kb": 64, 00:16:37.402 "state": "online", 00:16:37.402 "raid_level": "raid5f", 00:16:37.402 "superblock": false, 00:16:37.402 "num_base_bdevs": 4, 00:16:37.402 "num_base_bdevs_discovered": 4, 00:16:37.402 "num_base_bdevs_operational": 4, 00:16:37.402 "process": { 00:16:37.402 "type": "rebuild", 00:16:37.402 "target": "spare", 00:16:37.402 "progress": { 00:16:37.402 "blocks": 63360, 00:16:37.402 "percent": 32 00:16:37.402 } 00:16:37.402 }, 00:16:37.402 "base_bdevs_list": [ 00:16:37.402 { 00:16:37.402 "name": "spare", 00:16:37.402 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:37.402 "is_configured": true, 00:16:37.402 "data_offset": 0, 00:16:37.402 "data_size": 65536 00:16:37.402 }, 00:16:37.402 { 00:16:37.402 "name": "BaseBdev2", 00:16:37.402 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:37.402 "is_configured": true, 00:16:37.402 "data_offset": 0, 00:16:37.402 "data_size": 65536 00:16:37.402 }, 00:16:37.402 { 00:16:37.402 "name": "BaseBdev3", 00:16:37.402 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:37.402 "is_configured": true, 00:16:37.402 "data_offset": 0, 00:16:37.402 "data_size": 65536 00:16:37.402 }, 00:16:37.402 { 00:16:37.402 "name": 
"BaseBdev4", 00:16:37.402 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:37.402 "is_configured": true, 00:16:37.402 "data_offset": 0, 00:16:37.402 "data_size": 65536 00:16:37.402 } 00:16:37.402 ] 00:16:37.402 }' 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.402 14:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.362 14:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.623 14:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.623 14:49:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.623 "name": "raid_bdev1", 00:16:38.623 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:38.623 "strip_size_kb": 64, 00:16:38.623 "state": "online", 00:16:38.623 "raid_level": "raid5f", 00:16:38.623 "superblock": false, 00:16:38.623 "num_base_bdevs": 4, 00:16:38.623 "num_base_bdevs_discovered": 4, 00:16:38.623 "num_base_bdevs_operational": 4, 00:16:38.623 "process": { 00:16:38.623 "type": "rebuild", 00:16:38.623 "target": "spare", 00:16:38.623 "progress": { 00:16:38.623 "blocks": 86400, 00:16:38.623 "percent": 43 00:16:38.623 } 00:16:38.623 }, 00:16:38.623 "base_bdevs_list": [ 00:16:38.623 { 00:16:38.623 "name": "spare", 00:16:38.623 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:38.623 "is_configured": true, 00:16:38.623 "data_offset": 0, 00:16:38.623 "data_size": 65536 00:16:38.623 }, 00:16:38.623 { 00:16:38.623 "name": "BaseBdev2", 00:16:38.623 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:38.623 "is_configured": true, 00:16:38.623 "data_offset": 0, 00:16:38.623 "data_size": 65536 00:16:38.623 }, 00:16:38.623 { 00:16:38.623 "name": "BaseBdev3", 00:16:38.623 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:38.623 "is_configured": true, 00:16:38.623 "data_offset": 0, 00:16:38.623 "data_size": 65536 00:16:38.623 }, 00:16:38.623 { 00:16:38.623 "name": "BaseBdev4", 00:16:38.623 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:38.623 "is_configured": true, 00:16:38.623 "data_offset": 0, 00:16:38.623 "data_size": 65536 00:16:38.623 } 00:16:38.623 ] 00:16:38.623 }' 00:16:38.623 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.623 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.623 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.623 14:49:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.623 14:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.563 "name": "raid_bdev1", 00:16:39.563 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:39.563 "strip_size_kb": 64, 00:16:39.563 "state": "online", 00:16:39.563 "raid_level": "raid5f", 00:16:39.563 "superblock": false, 00:16:39.563 "num_base_bdevs": 4, 00:16:39.563 "num_base_bdevs_discovered": 4, 00:16:39.563 "num_base_bdevs_operational": 4, 00:16:39.563 "process": { 00:16:39.563 "type": "rebuild", 00:16:39.563 "target": "spare", 00:16:39.563 "progress": { 00:16:39.563 "blocks": 107520, 00:16:39.563 "percent": 54 00:16:39.563 } 
00:16:39.563 }, 00:16:39.563 "base_bdevs_list": [ 00:16:39.563 { 00:16:39.563 "name": "spare", 00:16:39.563 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:39.563 "is_configured": true, 00:16:39.563 "data_offset": 0, 00:16:39.563 "data_size": 65536 00:16:39.563 }, 00:16:39.563 { 00:16:39.563 "name": "BaseBdev2", 00:16:39.563 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:39.563 "is_configured": true, 00:16:39.563 "data_offset": 0, 00:16:39.563 "data_size": 65536 00:16:39.563 }, 00:16:39.563 { 00:16:39.563 "name": "BaseBdev3", 00:16:39.563 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:39.563 "is_configured": true, 00:16:39.563 "data_offset": 0, 00:16:39.563 "data_size": 65536 00:16:39.563 }, 00:16:39.563 { 00:16:39.563 "name": "BaseBdev4", 00:16:39.563 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:39.563 "is_configured": true, 00:16:39.563 "data_offset": 0, 00:16:39.563 "data_size": 65536 00:16:39.563 } 00:16:39.563 ] 00:16:39.563 }' 00:16:39.563 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.823 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.823 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.823 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.823 14:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.768 
14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.768 "name": "raid_bdev1", 00:16:40.768 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:40.768 "strip_size_kb": 64, 00:16:40.768 "state": "online", 00:16:40.768 "raid_level": "raid5f", 00:16:40.768 "superblock": false, 00:16:40.768 "num_base_bdevs": 4, 00:16:40.768 "num_base_bdevs_discovered": 4, 00:16:40.768 "num_base_bdevs_operational": 4, 00:16:40.768 "process": { 00:16:40.768 "type": "rebuild", 00:16:40.768 "target": "spare", 00:16:40.768 "progress": { 00:16:40.768 "blocks": 128640, 00:16:40.768 "percent": 65 00:16:40.768 } 00:16:40.768 }, 00:16:40.768 "base_bdevs_list": [ 00:16:40.768 { 00:16:40.768 "name": "spare", 00:16:40.768 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:40.768 "is_configured": true, 00:16:40.768 "data_offset": 0, 00:16:40.768 "data_size": 65536 00:16:40.768 }, 00:16:40.768 { 00:16:40.768 "name": "BaseBdev2", 00:16:40.768 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:40.768 "is_configured": true, 00:16:40.768 "data_offset": 0, 00:16:40.768 "data_size": 65536 00:16:40.768 }, 00:16:40.768 { 00:16:40.768 "name": "BaseBdev3", 00:16:40.768 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 
00:16:40.768 "is_configured": true, 00:16:40.768 "data_offset": 0, 00:16:40.768 "data_size": 65536 00:16:40.768 }, 00:16:40.768 { 00:16:40.768 "name": "BaseBdev4", 00:16:40.768 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:40.768 "is_configured": true, 00:16:40.768 "data_offset": 0, 00:16:40.768 "data_size": 65536 00:16:40.768 } 00:16:40.768 ] 00:16:40.768 }' 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.768 14:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.151 "name": "raid_bdev1", 00:16:42.151 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:42.151 "strip_size_kb": 64, 00:16:42.151 "state": "online", 00:16:42.151 "raid_level": "raid5f", 00:16:42.151 "superblock": false, 00:16:42.151 "num_base_bdevs": 4, 00:16:42.151 "num_base_bdevs_discovered": 4, 00:16:42.151 "num_base_bdevs_operational": 4, 00:16:42.151 "process": { 00:16:42.151 "type": "rebuild", 00:16:42.151 "target": "spare", 00:16:42.151 "progress": { 00:16:42.151 "blocks": 151680, 00:16:42.151 "percent": 77 00:16:42.151 } 00:16:42.151 }, 00:16:42.151 "base_bdevs_list": [ 00:16:42.151 { 00:16:42.151 "name": "spare", 00:16:42.151 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:42.151 "is_configured": true, 00:16:42.151 "data_offset": 0, 00:16:42.151 "data_size": 65536 00:16:42.151 }, 00:16:42.151 { 00:16:42.151 "name": "BaseBdev2", 00:16:42.151 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:42.151 "is_configured": true, 00:16:42.151 "data_offset": 0, 00:16:42.151 "data_size": 65536 00:16:42.151 }, 00:16:42.151 { 00:16:42.151 "name": "BaseBdev3", 00:16:42.151 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:42.151 "is_configured": true, 00:16:42.151 "data_offset": 0, 00:16:42.151 "data_size": 65536 00:16:42.151 }, 00:16:42.151 { 00:16:42.151 "name": "BaseBdev4", 00:16:42.151 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:42.151 "is_configured": true, 00:16:42.151 "data_offset": 0, 00:16:42.151 "data_size": 65536 00:16:42.151 } 00:16:42.151 ] 00:16:42.151 }' 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:42.151 14:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.151 14:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.151 14:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.092 "name": "raid_bdev1", 00:16:43.092 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:43.092 "strip_size_kb": 64, 00:16:43.092 "state": "online", 00:16:43.092 "raid_level": "raid5f", 00:16:43.092 "superblock": false, 00:16:43.092 "num_base_bdevs": 4, 00:16:43.092 "num_base_bdevs_discovered": 4, 00:16:43.092 "num_base_bdevs_operational": 4, 00:16:43.092 
"process": { 00:16:43.092 "type": "rebuild", 00:16:43.092 "target": "spare", 00:16:43.092 "progress": { 00:16:43.092 "blocks": 172800, 00:16:43.092 "percent": 87 00:16:43.092 } 00:16:43.092 }, 00:16:43.092 "base_bdevs_list": [ 00:16:43.092 { 00:16:43.092 "name": "spare", 00:16:43.092 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:43.092 "is_configured": true, 00:16:43.092 "data_offset": 0, 00:16:43.092 "data_size": 65536 00:16:43.092 }, 00:16:43.092 { 00:16:43.092 "name": "BaseBdev2", 00:16:43.092 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:43.092 "is_configured": true, 00:16:43.092 "data_offset": 0, 00:16:43.092 "data_size": 65536 00:16:43.092 }, 00:16:43.092 { 00:16:43.092 "name": "BaseBdev3", 00:16:43.092 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:43.092 "is_configured": true, 00:16:43.092 "data_offset": 0, 00:16:43.092 "data_size": 65536 00:16:43.092 }, 00:16:43.092 { 00:16:43.092 "name": "BaseBdev4", 00:16:43.092 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:43.092 "is_configured": true, 00:16:43.092 "data_offset": 0, 00:16:43.092 "data_size": 65536 00:16:43.092 } 00:16:43.092 ] 00:16:43.092 }' 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.092 14:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.474 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.474 "name": "raid_bdev1", 00:16:44.474 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:44.474 "strip_size_kb": 64, 00:16:44.474 "state": "online", 00:16:44.474 "raid_level": "raid5f", 00:16:44.474 "superblock": false, 00:16:44.474 "num_base_bdevs": 4, 00:16:44.474 "num_base_bdevs_discovered": 4, 00:16:44.474 "num_base_bdevs_operational": 4, 00:16:44.474 "process": { 00:16:44.474 "type": "rebuild", 00:16:44.474 "target": "spare", 00:16:44.475 "progress": { 00:16:44.475 "blocks": 195840, 00:16:44.475 "percent": 99 00:16:44.475 } 00:16:44.475 }, 00:16:44.475 "base_bdevs_list": [ 00:16:44.475 { 00:16:44.475 "name": "spare", 00:16:44.475 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:44.475 "is_configured": true, 00:16:44.475 "data_offset": 0, 00:16:44.475 "data_size": 65536 00:16:44.475 }, 00:16:44.475 { 00:16:44.475 "name": "BaseBdev2", 00:16:44.475 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:44.475 "is_configured": true, 00:16:44.475 
"data_offset": 0, 00:16:44.475 "data_size": 65536 00:16:44.475 }, 00:16:44.475 { 00:16:44.475 "name": "BaseBdev3", 00:16:44.475 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:44.475 "is_configured": true, 00:16:44.475 "data_offset": 0, 00:16:44.475 "data_size": 65536 00:16:44.475 }, 00:16:44.475 { 00:16:44.475 "name": "BaseBdev4", 00:16:44.475 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:44.475 "is_configured": true, 00:16:44.475 "data_offset": 0, 00:16:44.475 "data_size": 65536 00:16:44.475 } 00:16:44.475 ] 00:16:44.475 }' 00:16:44.475 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.475 [2024-12-09 14:49:22.285163] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:44.475 [2024-12-09 14:49:22.285281] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:44.475 [2024-12-09 14:49:22.285350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.475 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.475 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.475 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.475 14:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.418 "name": "raid_bdev1", 00:16:45.418 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:45.418 "strip_size_kb": 64, 00:16:45.418 "state": "online", 00:16:45.418 "raid_level": "raid5f", 00:16:45.418 "superblock": false, 00:16:45.418 "num_base_bdevs": 4, 00:16:45.418 "num_base_bdevs_discovered": 4, 00:16:45.418 "num_base_bdevs_operational": 4, 00:16:45.418 "base_bdevs_list": [ 00:16:45.418 { 00:16:45.418 "name": "spare", 00:16:45.418 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:45.418 "is_configured": true, 00:16:45.418 "data_offset": 0, 00:16:45.418 "data_size": 65536 00:16:45.418 }, 00:16:45.418 { 00:16:45.418 "name": "BaseBdev2", 00:16:45.418 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:45.418 "is_configured": true, 00:16:45.418 "data_offset": 0, 00:16:45.418 "data_size": 65536 00:16:45.418 }, 00:16:45.418 { 00:16:45.418 "name": "BaseBdev3", 00:16:45.418 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:45.418 "is_configured": true, 00:16:45.418 "data_offset": 0, 00:16:45.418 "data_size": 65536 00:16:45.418 }, 00:16:45.418 { 00:16:45.418 "name": "BaseBdev4", 00:16:45.418 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:45.418 "is_configured": 
true, 00:16:45.418 "data_offset": 0, 00:16:45.418 "data_size": 65536 00:16:45.418 } 00:16:45.418 ] 00:16:45.418 }' 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.418 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.419 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.419 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.419 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.419 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.419 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.419 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.419 "name": "raid_bdev1", 00:16:45.419 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:45.419 "strip_size_kb": 64, 00:16:45.419 "state": 
"online", 00:16:45.419 "raid_level": "raid5f", 00:16:45.419 "superblock": false, 00:16:45.419 "num_base_bdevs": 4, 00:16:45.419 "num_base_bdevs_discovered": 4, 00:16:45.419 "num_base_bdevs_operational": 4, 00:16:45.419 "base_bdevs_list": [ 00:16:45.419 { 00:16:45.419 "name": "spare", 00:16:45.419 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:45.419 "is_configured": true, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 65536 00:16:45.419 }, 00:16:45.419 { 00:16:45.419 "name": "BaseBdev2", 00:16:45.419 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:45.419 "is_configured": true, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 65536 00:16:45.419 }, 00:16:45.419 { 00:16:45.419 "name": "BaseBdev3", 00:16:45.419 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:45.419 "is_configured": true, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 65536 00:16:45.419 }, 00:16:45.419 { 00:16:45.419 "name": "BaseBdev4", 00:16:45.419 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:45.419 "is_configured": true, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 65536 00:16:45.419 } 00:16:45.419 ] 00:16:45.419 }' 00:16:45.419 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.687 14:49:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.687 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.687 "name": "raid_bdev1", 00:16:45.687 "uuid": "0e4fd8f0-8f9d-408f-95a1-9c719ce3e46f", 00:16:45.687 "strip_size_kb": 64, 00:16:45.687 "state": "online", 00:16:45.688 "raid_level": "raid5f", 00:16:45.688 "superblock": false, 00:16:45.688 "num_base_bdevs": 4, 00:16:45.688 "num_base_bdevs_discovered": 4, 00:16:45.688 "num_base_bdevs_operational": 4, 00:16:45.688 "base_bdevs_list": [ 00:16:45.688 { 00:16:45.688 "name": "spare", 00:16:45.688 "uuid": "555d02b8-8a27-5479-b520-26e410afb000", 00:16:45.688 "is_configured": true, 00:16:45.688 "data_offset": 0, 00:16:45.688 "data_size": 65536 00:16:45.688 }, 00:16:45.688 { 00:16:45.688 
"name": "BaseBdev2", 00:16:45.688 "uuid": "36f43e2e-1b3f-57ee-9378-a72bdac752bb", 00:16:45.688 "is_configured": true, 00:16:45.688 "data_offset": 0, 00:16:45.688 "data_size": 65536 00:16:45.688 }, 00:16:45.688 { 00:16:45.688 "name": "BaseBdev3", 00:16:45.688 "uuid": "23a26a2d-6b5e-5a96-8e9a-6eb8dd00f2e2", 00:16:45.688 "is_configured": true, 00:16:45.688 "data_offset": 0, 00:16:45.688 "data_size": 65536 00:16:45.688 }, 00:16:45.688 { 00:16:45.688 "name": "BaseBdev4", 00:16:45.688 "uuid": "a10e94b6-272a-5bb0-bf07-b27322f8ddf1", 00:16:45.688 "is_configured": true, 00:16:45.688 "data_offset": 0, 00:16:45.688 "data_size": 65536 00:16:45.688 } 00:16:45.688 ] 00:16:45.688 }' 00:16:45.688 14:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.688 14:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.947 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.947 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.947 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.947 [2024-12-09 14:49:24.063667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.947 [2024-12-09 14:49:24.063779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.947 [2024-12-09 14:49:24.063896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.947 [2024-12-09 14:49:24.064051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.947 [2024-12-09 14:49:24.064113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:45.947 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.207 14:49:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.207 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:46.466 /dev/nbd0 00:16:46.466 14:49:24 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.466 1+0 records in 00:16:46.466 1+0 records out 00:16:46.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514804 s, 8.0 MB/s 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.466 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:46.466 /dev/nbd1 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.726 1+0 records in 00:16:46.726 1+0 records out 00:16:46.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529449 s, 7.7 MB/s 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.726 14:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.985 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:47.243 14:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85910 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85910 ']' 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85910 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85910 00:16:47.244 killing process with pid 85910 00:16:47.244 Received shutdown signal, test time was about 60.000000 seconds 00:16:47.244 00:16:47.244 Latency(us) 00:16:47.244 [2024-12-09T14:49:25.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.244 [2024-12-09T14:49:25.366Z] =================================================================================================================== 00:16:47.244 [2024-12-09T14:49:25.366Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85910' 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85910 00:16:47.244 [2024-12-09 14:49:25.328720] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.244 14:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85910 00:16:47.812 [2024-12-09 14:49:25.804549] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:49.190 00:16:49.190 real 0m20.132s 00:16:49.190 user 0m24.054s 00:16:49.190 sys 0m2.289s 00:16:49.190 ************************************ 00:16:49.190 END TEST raid5f_rebuild_test 00:16:49.190 ************************************ 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.190 14:49:26 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:16:49.190 14:49:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:49.190 14:49:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.190 14:49:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.190 ************************************ 00:16:49.190 START TEST raid5f_rebuild_test_sb 00:16:49.190 ************************************ 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:49.190 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:49.191 14:49:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86430 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86430 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86430 ']' 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.191 14:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.191 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:49.191 Zero copy mechanism will not be used. 00:16:49.191 [2024-12-09 14:49:27.073979] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:16:49.191 [2024-12-09 14:49:27.074116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86430 ] 00:16:49.191 [2024-12-09 14:49:27.249463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.450 [2024-12-09 14:49:27.367208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.450 [2024-12-09 14:49:27.570788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.450 [2024-12-09 14:49:27.570835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.019 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.019 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:50.019 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 BaseBdev1_malloc 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 [2024-12-09 14:49:27.940243] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:50.020 [2024-12-09 14:49:27.940304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.020 [2024-12-09 14:49:27.940325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.020 [2024-12-09 14:49:27.940336] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.020 [2024-12-09 14:49:27.942432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.020 [2024-12-09 14:49:27.942472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.020 BaseBdev1 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 BaseBdev2_malloc 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 [2024-12-09 14:49:27.995877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:50.020 [2024-12-09 14:49:27.995983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:50.020 [2024-12-09 14:49:27.996010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.020 [2024-12-09 14:49:27.996021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.020 [2024-12-09 14:49:27.998108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.020 [2024-12-09 14:49:27.998149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:50.020 BaseBdev2 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.020 14:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 BaseBdev3_malloc 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 [2024-12-09 14:49:28.063536] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:50.020 [2024-12-09 14:49:28.063606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.020 [2024-12-09 14:49:28.063630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:50.020 [2024-12-09 
14:49:28.063641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.020 [2024-12-09 14:49:28.065739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.020 [2024-12-09 14:49:28.065774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:50.020 BaseBdev3 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 BaseBdev4_malloc 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 [2024-12-09 14:49:28.119283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:50.020 [2024-12-09 14:49:28.119343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.020 [2024-12-09 14:49:28.119364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:50.020 [2024-12-09 14:49:28.119375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.020 [2024-12-09 14:49:28.121481] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:50.020 [2024-12-09 14:49:28.121524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:50.020 BaseBdev4 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.020 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.279 spare_malloc 00:16:50.279 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.279 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.279 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.279 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.279 spare_delay 00:16:50.279 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.279 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.280 [2024-12-09 14:49:28.185838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.280 [2024-12-09 14:49:28.185888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.280 [2024-12-09 14:49:28.185905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:50.280 [2024-12-09 14:49:28.185915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.280 [2024-12-09 14:49:28.187988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.280 [2024-12-09 14:49:28.188029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.280 spare 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.280 [2024-12-09 14:49:28.197869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.280 [2024-12-09 14:49:28.199630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.280 [2024-12-09 14:49:28.199688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.280 [2024-12-09 14:49:28.199737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.280 [2024-12-09 14:49:28.199922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.280 [2024-12-09 14:49:28.199935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.280 [2024-12-09 14:49:28.200176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.280 [2024-12-09 14:49:28.207465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.280 [2024-12-09 14:49:28.207522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.280 [2024-12-09 14:49:28.207751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.280 14:49:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.280 "name": "raid_bdev1", 00:16:50.280 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:50.280 "strip_size_kb": 64, 00:16:50.280 "state": "online", 00:16:50.280 "raid_level": "raid5f", 00:16:50.280 "superblock": true, 00:16:50.280 "num_base_bdevs": 4, 00:16:50.280 "num_base_bdevs_discovered": 4, 00:16:50.280 "num_base_bdevs_operational": 4, 00:16:50.280 "base_bdevs_list": [ 00:16:50.280 { 00:16:50.280 "name": "BaseBdev1", 00:16:50.280 "uuid": "29ad8f12-576f-5a9c-a1c1-3d4071ccf720", 00:16:50.280 "is_configured": true, 00:16:50.280 "data_offset": 2048, 00:16:50.280 "data_size": 63488 00:16:50.280 }, 00:16:50.280 { 00:16:50.280 "name": "BaseBdev2", 00:16:50.280 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:50.280 "is_configured": true, 00:16:50.280 "data_offset": 2048, 00:16:50.280 "data_size": 63488 00:16:50.280 }, 00:16:50.280 { 00:16:50.280 "name": "BaseBdev3", 00:16:50.280 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:50.280 "is_configured": true, 00:16:50.280 "data_offset": 2048, 00:16:50.280 "data_size": 63488 00:16:50.280 }, 00:16:50.280 { 00:16:50.280 "name": "BaseBdev4", 00:16:50.280 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:50.280 "is_configured": true, 00:16:50.280 "data_offset": 2048, 00:16:50.280 "data_size": 63488 00:16:50.280 } 00:16:50.280 ] 00:16:50.280 }' 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.280 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.849 14:49:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.849 [2024-12-09 14:49:28.691556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:50.849 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:50.850 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:50.850 14:49:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:50.850 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:50.850 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:50.850 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.850 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:50.850 [2024-12-09 14:49:28.946943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:51.109 /dev/nbd0 00:16:51.109 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.109 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.109 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.109 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:51.109 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.109 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.110 14:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.110 1+0 records in 00:16:51.110 
1+0 records out 00:16:51.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356692 s, 11.5 MB/s 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:51.110 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:51.678 496+0 records in 00:16:51.678 496+0 records out 00:16:51.678 97517568 bytes (98 MB, 93 MiB) copied, 0.469083 s, 208 MB/s 00:16:51.678 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:51.678 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.678 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.679 14:49:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:51.679 [2024-12-09 14:49:29.709856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.679 [2024-12-09 14:49:29.732640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.679 14:49:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.679 "name": "raid_bdev1", 00:16:51.679 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:51.679 "strip_size_kb": 64, 00:16:51.679 "state": "online", 00:16:51.679 "raid_level": "raid5f", 00:16:51.679 "superblock": true, 00:16:51.679 "num_base_bdevs": 4, 00:16:51.679 "num_base_bdevs_discovered": 3, 00:16:51.679 "num_base_bdevs_operational": 3, 00:16:51.679 
"base_bdevs_list": [ 00:16:51.679 { 00:16:51.679 "name": null, 00:16:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.679 "is_configured": false, 00:16:51.679 "data_offset": 0, 00:16:51.679 "data_size": 63488 00:16:51.679 }, 00:16:51.679 { 00:16:51.679 "name": "BaseBdev2", 00:16:51.679 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:51.679 "is_configured": true, 00:16:51.679 "data_offset": 2048, 00:16:51.679 "data_size": 63488 00:16:51.679 }, 00:16:51.679 { 00:16:51.679 "name": "BaseBdev3", 00:16:51.679 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:51.679 "is_configured": true, 00:16:51.679 "data_offset": 2048, 00:16:51.679 "data_size": 63488 00:16:51.679 }, 00:16:51.679 { 00:16:51.679 "name": "BaseBdev4", 00:16:51.679 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:51.679 "is_configured": true, 00:16:51.679 "data_offset": 2048, 00:16:51.679 "data_size": 63488 00:16:51.679 } 00:16:51.679 ] 00:16:51.679 }' 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.679 14:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.247 14:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:52.247 14:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.247 14:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.247 [2024-12-09 14:49:30.179885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.247 [2024-12-09 14:49:30.196443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:52.247 14:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.247 14:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:52.247 [2024-12-09 14:49:30.205739] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.257 "name": "raid_bdev1", 00:16:53.257 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:53.257 "strip_size_kb": 64, 00:16:53.257 "state": "online", 00:16:53.257 "raid_level": "raid5f", 00:16:53.257 "superblock": true, 00:16:53.257 "num_base_bdevs": 4, 00:16:53.257 "num_base_bdevs_discovered": 4, 00:16:53.257 "num_base_bdevs_operational": 4, 00:16:53.257 "process": { 00:16:53.257 "type": "rebuild", 00:16:53.257 "target": "spare", 00:16:53.257 "progress": { 00:16:53.257 "blocks": 19200, 00:16:53.257 "percent": 10 00:16:53.257 } 00:16:53.257 }, 00:16:53.257 "base_bdevs_list": [ 00:16:53.257 { 00:16:53.257 "name": "spare", 00:16:53.257 "uuid": 
"54edf607-3f40-5153-93ce-27329de11cc3", 00:16:53.257 "is_configured": true, 00:16:53.257 "data_offset": 2048, 00:16:53.257 "data_size": 63488 00:16:53.257 }, 00:16:53.257 { 00:16:53.257 "name": "BaseBdev2", 00:16:53.257 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:53.257 "is_configured": true, 00:16:53.257 "data_offset": 2048, 00:16:53.257 "data_size": 63488 00:16:53.257 }, 00:16:53.257 { 00:16:53.257 "name": "BaseBdev3", 00:16:53.257 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:53.257 "is_configured": true, 00:16:53.257 "data_offset": 2048, 00:16:53.257 "data_size": 63488 00:16:53.257 }, 00:16:53.257 { 00:16:53.257 "name": "BaseBdev4", 00:16:53.257 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:53.257 "is_configured": true, 00:16:53.257 "data_offset": 2048, 00:16:53.257 "data_size": 63488 00:16:53.257 } 00:16:53.257 ] 00:16:53.257 }' 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.257 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.257 [2024-12-09 14:49:31.360511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.517 [2024-12-09 14:49:31.414249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:53.517 [2024-12-09 14:49:31.414365] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.517 [2024-12-09 14:49:31.414383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.517 [2024-12-09 14:49:31.414393] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
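An aside on the trace format above: lines like `[[ rebuild == \r\e\b\u\i\l\d ]]` are not garbled. In bash's `[[ ]]`, the right-hand side of `==` is a glob pattern, so xtrace prints it with every character escaped to show the comparison is literal. A minimal sketch (the `process_type` variable mirrors the script's local, nothing else is taken from the SPDK source):

```shell
# xtrace renders a quoted == operand as \r\e\b\u\i\l\d because the RHS
# of [[ == ]] is a pattern; escaping marks it as a literal match.
process_type=rebuild
set -x                                     # mirror the script's tracing
[[ "$process_type" == "rebuild" ]] && result="types match"
set +x
echo "$result"
```

The trace output goes to stderr, which is why the escaped form appears inline with the test's timestamped log stream.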
00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.517 "name": "raid_bdev1", 00:16:53.517 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:53.517 "strip_size_kb": 64, 00:16:53.517 "state": "online", 00:16:53.517 "raid_level": "raid5f", 00:16:53.517 "superblock": true, 00:16:53.517 "num_base_bdevs": 4, 00:16:53.517 "num_base_bdevs_discovered": 3, 00:16:53.517 "num_base_bdevs_operational": 3, 00:16:53.517 "base_bdevs_list": [ 00:16:53.517 { 00:16:53.517 "name": null, 00:16:53.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.517 "is_configured": false, 00:16:53.517 "data_offset": 0, 00:16:53.517 "data_size": 63488 00:16:53.517 }, 00:16:53.517 { 00:16:53.517 "name": "BaseBdev2", 00:16:53.517 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:53.517 "is_configured": true, 00:16:53.517 "data_offset": 2048, 00:16:53.517 "data_size": 63488 00:16:53.517 }, 00:16:53.517 { 00:16:53.517 "name": "BaseBdev3", 00:16:53.517 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:53.517 "is_configured": true, 00:16:53.517 "data_offset": 2048, 00:16:53.517 "data_size": 63488 00:16:53.517 }, 00:16:53.517 { 00:16:53.517 "name": "BaseBdev4", 00:16:53.517 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:53.517 "is_configured": true, 00:16:53.517 "data_offset": 2048, 00:16:53.517 "data_size": 63488 00:16:53.517 } 00:16:53.517 ] 00:16:53.517 }' 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.517 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.778 
14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.778 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.037 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.037 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.037 "name": "raid_bdev1", 00:16:54.037 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:54.037 "strip_size_kb": 64, 00:16:54.037 "state": "online", 00:16:54.037 "raid_level": "raid5f", 00:16:54.037 "superblock": true, 00:16:54.037 "num_base_bdevs": 4, 00:16:54.037 "num_base_bdevs_discovered": 3, 00:16:54.037 "num_base_bdevs_operational": 3, 00:16:54.037 "base_bdevs_list": [ 00:16:54.037 { 00:16:54.037 "name": null, 00:16:54.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.037 "is_configured": false, 00:16:54.037 "data_offset": 0, 00:16:54.037 "data_size": 63488 00:16:54.037 }, 00:16:54.037 { 00:16:54.037 "name": "BaseBdev2", 00:16:54.037 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:54.037 "is_configured": true, 00:16:54.037 "data_offset": 2048, 00:16:54.037 "data_size": 63488 00:16:54.037 }, 00:16:54.037 { 00:16:54.037 "name": "BaseBdev3", 00:16:54.037 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:54.037 "is_configured": true, 00:16:54.037 "data_offset": 2048, 00:16:54.037 
"data_size": 63488 00:16:54.037 }, 00:16:54.037 { 00:16:54.037 "name": "BaseBdev4", 00:16:54.037 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:54.037 "is_configured": true, 00:16:54.037 "data_offset": 2048, 00:16:54.037 "data_size": 63488 00:16:54.037 } 00:16:54.037 ] 00:16:54.037 }' 00:16:54.037 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.037 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.037 14:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.037 14:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.037 14:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.037 14:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.037 14:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.037 [2024-12-09 14:49:32.043842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.037 [2024-12-09 14:49:32.059150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:54.037 14:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.037 14:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:54.037 [2024-12-09 14:49:32.068882] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.976 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.236 "name": "raid_bdev1", 00:16:55.236 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:55.236 "strip_size_kb": 64, 00:16:55.236 "state": "online", 00:16:55.236 "raid_level": "raid5f", 00:16:55.236 "superblock": true, 00:16:55.236 "num_base_bdevs": 4, 00:16:55.236 "num_base_bdevs_discovered": 4, 00:16:55.236 "num_base_bdevs_operational": 4, 00:16:55.236 "process": { 00:16:55.236 "type": "rebuild", 00:16:55.236 "target": "spare", 00:16:55.236 "progress": { 00:16:55.236 "blocks": 17280, 00:16:55.236 "percent": 9 00:16:55.236 } 00:16:55.236 }, 00:16:55.236 "base_bdevs_list": [ 00:16:55.236 { 00:16:55.236 "name": "spare", 00:16:55.236 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 }, 00:16:55.236 { 00:16:55.236 "name": "BaseBdev2", 00:16:55.236 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 }, 00:16:55.236 { 
00:16:55.236 "name": "BaseBdev3", 00:16:55.236 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 }, 00:16:55.236 { 00:16:55.236 "name": "BaseBdev4", 00:16:55.236 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 } 00:16:55.236 ] 00:16:55.236 }' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:55.236 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=645 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.236 "name": "raid_bdev1", 00:16:55.236 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:55.236 "strip_size_kb": 64, 00:16:55.236 "state": "online", 00:16:55.236 "raid_level": "raid5f", 00:16:55.236 "superblock": true, 00:16:55.236 "num_base_bdevs": 4, 00:16:55.236 "num_base_bdevs_discovered": 4, 00:16:55.236 "num_base_bdevs_operational": 4, 00:16:55.236 "process": { 00:16:55.236 "type": "rebuild", 00:16:55.236 "target": "spare", 00:16:55.236 "progress": { 00:16:55.236 "blocks": 21120, 00:16:55.236 "percent": 11 00:16:55.236 } 00:16:55.236 }, 00:16:55.236 "base_bdevs_list": [ 00:16:55.236 { 00:16:55.236 "name": "spare", 00:16:55.236 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 }, 00:16:55.236 { 00:16:55.236 "name": "BaseBdev2", 00:16:55.236 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 }, 00:16:55.236 { 
00:16:55.236 "name": "BaseBdev3", 00:16:55.236 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 }, 00:16:55.236 { 00:16:55.236 "name": "BaseBdev4", 00:16:55.236 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:55.236 "is_configured": true, 00:16:55.236 "data_offset": 2048, 00:16:55.236 "data_size": 63488 00:16:55.236 } 00:16:55.236 ] 00:16:55.236 }' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.236 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.237 14:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.618 "name": "raid_bdev1", 00:16:56.618 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:56.618 "strip_size_kb": 64, 00:16:56.618 "state": "online", 00:16:56.618 "raid_level": "raid5f", 00:16:56.618 "superblock": true, 00:16:56.618 "num_base_bdevs": 4, 00:16:56.618 "num_base_bdevs_discovered": 4, 00:16:56.618 "num_base_bdevs_operational": 4, 00:16:56.618 "process": { 00:16:56.618 "type": "rebuild", 00:16:56.618 "target": "spare", 00:16:56.618 "progress": { 00:16:56.618 "blocks": 42240, 00:16:56.618 "percent": 22 00:16:56.618 } 00:16:56.618 }, 00:16:56.618 "base_bdevs_list": [ 00:16:56.618 { 00:16:56.618 "name": "spare", 00:16:56.618 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:16:56.618 "is_configured": true, 00:16:56.618 "data_offset": 2048, 00:16:56.618 "data_size": 63488 00:16:56.618 }, 00:16:56.618 { 00:16:56.618 "name": "BaseBdev2", 00:16:56.618 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:56.618 "is_configured": true, 00:16:56.618 "data_offset": 2048, 00:16:56.618 "data_size": 63488 00:16:56.618 }, 00:16:56.618 { 00:16:56.618 "name": "BaseBdev3", 00:16:56.618 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:56.618 "is_configured": true, 00:16:56.618 "data_offset": 2048, 00:16:56.618 "data_size": 63488 00:16:56.618 }, 00:16:56.618 { 00:16:56.618 "name": "BaseBdev4", 00:16:56.618 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:56.618 "is_configured": true, 00:16:56.618 "data_offset": 2048, 00:16:56.618 "data_size": 63488 00:16:56.618 } 00:16:56.618 ] 00:16:56.618 }' 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:16:56.618 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.619 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.619 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.619 14:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.559 "name": "raid_bdev1", 00:16:57.559 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:57.559 "strip_size_kb": 64, 00:16:57.559 "state": "online", 00:16:57.559 
"raid_level": "raid5f", 00:16:57.559 "superblock": true, 00:16:57.559 "num_base_bdevs": 4, 00:16:57.559 "num_base_bdevs_discovered": 4, 00:16:57.559 "num_base_bdevs_operational": 4, 00:16:57.559 "process": { 00:16:57.559 "type": "rebuild", 00:16:57.559 "target": "spare", 00:16:57.559 "progress": { 00:16:57.559 "blocks": 65280, 00:16:57.559 "percent": 34 00:16:57.559 } 00:16:57.559 }, 00:16:57.559 "base_bdevs_list": [ 00:16:57.559 { 00:16:57.559 "name": "spare", 00:16:57.559 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:16:57.559 "is_configured": true, 00:16:57.559 "data_offset": 2048, 00:16:57.559 "data_size": 63488 00:16:57.559 }, 00:16:57.559 { 00:16:57.559 "name": "BaseBdev2", 00:16:57.559 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:57.559 "is_configured": true, 00:16:57.559 "data_offset": 2048, 00:16:57.559 "data_size": 63488 00:16:57.559 }, 00:16:57.559 { 00:16:57.559 "name": "BaseBdev3", 00:16:57.559 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:57.559 "is_configured": true, 00:16:57.559 "data_offset": 2048, 00:16:57.559 "data_size": 63488 00:16:57.559 }, 00:16:57.559 { 00:16:57.559 "name": "BaseBdev4", 00:16:57.559 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:57.559 "is_configured": true, 00:16:57.559 "data_offset": 2048, 00:16:57.559 "data_size": 63488 00:16:57.559 } 00:16:57.559 ] 00:16:57.559 }' 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.559 14:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.499 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.759 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.759 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.759 "name": "raid_bdev1", 00:16:58.759 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:58.759 "strip_size_kb": 64, 00:16:58.759 "state": "online", 00:16:58.759 "raid_level": "raid5f", 00:16:58.759 "superblock": true, 00:16:58.759 "num_base_bdevs": 4, 00:16:58.759 "num_base_bdevs_discovered": 4, 00:16:58.759 "num_base_bdevs_operational": 4, 00:16:58.759 "process": { 00:16:58.759 "type": "rebuild", 00:16:58.759 "target": "spare", 00:16:58.759 "progress": { 00:16:58.759 "blocks": 86400, 00:16:58.759 "percent": 45 00:16:58.759 } 00:16:58.759 }, 00:16:58.759 "base_bdevs_list": [ 00:16:58.759 { 00:16:58.759 "name": "spare", 00:16:58.759 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:16:58.759 "is_configured": true, 
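The progress percentages in these dumps are consistent with integer division of rebuilt blocks by the array's usable capacity: for this 4-disk raid5f array, (num_base_bdevs - 1) * data_size = 3 * 63488 = 190464 blocks. This is an inference from the logged numbers, not taken from the SPDK source:

```shell
# Check the logged (blocks, percent) pairs against the inferred formula:
# 19200->10, 42240->22, 65280->34, 86400->45, 107520->56, 151680->79.
capacity=$(( 3 * 63488 ))
for blocks in 19200 42240 65280 86400 107520 128640 151680; do
  printf '%6d blocks -> %d%%\n' "$blocks" $(( blocks * 100 / capacity ))
done
```

Every pair in the log matches this truncating division, including the 17280-block/9% sample from the second rebuild pass.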
00:16:58.759 "data_offset": 2048, 00:16:58.759 "data_size": 63488 00:16:58.759 }, 00:16:58.759 { 00:16:58.759 "name": "BaseBdev2", 00:16:58.759 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:58.759 "is_configured": true, 00:16:58.759 "data_offset": 2048, 00:16:58.759 "data_size": 63488 00:16:58.759 }, 00:16:58.759 { 00:16:58.759 "name": "BaseBdev3", 00:16:58.759 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:58.759 "is_configured": true, 00:16:58.759 "data_offset": 2048, 00:16:58.759 "data_size": 63488 00:16:58.759 }, 00:16:58.759 { 00:16:58.759 "name": "BaseBdev4", 00:16:58.759 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:58.759 "is_configured": true, 00:16:58.759 "data_offset": 2048, 00:16:58.759 "data_size": 63488 00:16:58.759 } 00:16:58.759 ] 00:16:58.759 }' 00:16:58.759 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.759 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.759 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.759 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.759 14:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.699 "name": "raid_bdev1", 00:16:59.699 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:16:59.699 "strip_size_kb": 64, 00:16:59.699 "state": "online", 00:16:59.699 "raid_level": "raid5f", 00:16:59.699 "superblock": true, 00:16:59.699 "num_base_bdevs": 4, 00:16:59.699 "num_base_bdevs_discovered": 4, 00:16:59.699 "num_base_bdevs_operational": 4, 00:16:59.699 "process": { 00:16:59.699 "type": "rebuild", 00:16:59.699 "target": "spare", 00:16:59.699 "progress": { 00:16:59.699 "blocks": 107520, 00:16:59.699 "percent": 56 00:16:59.699 } 00:16:59.699 }, 00:16:59.699 "base_bdevs_list": [ 00:16:59.699 { 00:16:59.699 "name": "spare", 00:16:59.699 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:16:59.699 "is_configured": true, 00:16:59.699 "data_offset": 2048, 00:16:59.699 "data_size": 63488 00:16:59.699 }, 00:16:59.699 { 00:16:59.699 "name": "BaseBdev2", 00:16:59.699 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:16:59.699 "is_configured": true, 00:16:59.699 "data_offset": 2048, 00:16:59.699 "data_size": 63488 00:16:59.699 }, 00:16:59.699 { 00:16:59.699 "name": "BaseBdev3", 00:16:59.699 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:16:59.699 "is_configured": true, 00:16:59.699 "data_offset": 2048, 00:16:59.699 "data_size": 63488 00:16:59.699 }, 00:16:59.699 
{ 00:16:59.699 "name": "BaseBdev4", 00:16:59.699 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:16:59.699 "is_configured": true, 00:16:59.699 "data_offset": 2048, 00:16:59.699 "data_size": 63488 00:16:59.699 } 00:16:59.699 ] 00:16:59.699 }' 00:16:59.699 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.959 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.959 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.959 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.959 14:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.899 "name": "raid_bdev1", 00:17:00.899 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:00.899 "strip_size_kb": 64, 00:17:00.899 "state": "online", 00:17:00.899 "raid_level": "raid5f", 00:17:00.899 "superblock": true, 00:17:00.899 "num_base_bdevs": 4, 00:17:00.899 "num_base_bdevs_discovered": 4, 00:17:00.899 "num_base_bdevs_operational": 4, 00:17:00.899 "process": { 00:17:00.899 "type": "rebuild", 00:17:00.899 "target": "spare", 00:17:00.899 "progress": { 00:17:00.899 "blocks": 128640, 00:17:00.899 "percent": 67 00:17:00.899 } 00:17:00.899 }, 00:17:00.899 "base_bdevs_list": [ 00:17:00.899 { 00:17:00.899 "name": "spare", 00:17:00.899 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:00.899 "is_configured": true, 00:17:00.899 "data_offset": 2048, 00:17:00.899 "data_size": 63488 00:17:00.899 }, 00:17:00.899 { 00:17:00.899 "name": "BaseBdev2", 00:17:00.899 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:00.899 "is_configured": true, 00:17:00.899 "data_offset": 2048, 00:17:00.899 "data_size": 63488 00:17:00.899 }, 00:17:00.899 { 00:17:00.899 "name": "BaseBdev3", 00:17:00.899 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:00.899 "is_configured": true, 00:17:00.899 "data_offset": 2048, 00:17:00.899 "data_size": 63488 00:17:00.899 }, 00:17:00.899 { 00:17:00.899 "name": "BaseBdev4", 00:17:00.899 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:00.899 "is_configured": true, 00:17:00.899 "data_offset": 2048, 00:17:00.899 "data_size": 63488 00:17:00.899 } 00:17:00.899 ] 00:17:00.899 }' 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.899 14:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:00.899 14:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.899 14:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.279 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.280 "name": "raid_bdev1", 00:17:02.280 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:02.280 "strip_size_kb": 64, 00:17:02.280 "state": "online", 00:17:02.280 "raid_level": "raid5f", 00:17:02.280 "superblock": true, 00:17:02.280 "num_base_bdevs": 4, 00:17:02.280 "num_base_bdevs_discovered": 4, 00:17:02.280 "num_base_bdevs_operational": 4, 00:17:02.280 "process": { 00:17:02.280 "type": 
"rebuild", 00:17:02.280 "target": "spare", 00:17:02.280 "progress": { 00:17:02.280 "blocks": 151680, 00:17:02.280 "percent": 79 00:17:02.280 } 00:17:02.280 }, 00:17:02.280 "base_bdevs_list": [ 00:17:02.280 { 00:17:02.280 "name": "spare", 00:17:02.280 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:02.280 "is_configured": true, 00:17:02.280 "data_offset": 2048, 00:17:02.280 "data_size": 63488 00:17:02.280 }, 00:17:02.280 { 00:17:02.280 "name": "BaseBdev2", 00:17:02.280 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:02.280 "is_configured": true, 00:17:02.280 "data_offset": 2048, 00:17:02.280 "data_size": 63488 00:17:02.280 }, 00:17:02.280 { 00:17:02.280 "name": "BaseBdev3", 00:17:02.280 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:02.280 "is_configured": true, 00:17:02.280 "data_offset": 2048, 00:17:02.280 "data_size": 63488 00:17:02.280 }, 00:17:02.280 { 00:17:02.280 "name": "BaseBdev4", 00:17:02.280 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:02.280 "is_configured": true, 00:17:02.280 "data_offset": 2048, 00:17:02.280 "data_size": 63488 00:17:02.280 } 00:17:02.280 ] 00:17:02.280 }' 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.280 14:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.218 "name": "raid_bdev1", 00:17:03.218 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:03.218 "strip_size_kb": 64, 00:17:03.218 "state": "online", 00:17:03.218 "raid_level": "raid5f", 00:17:03.218 "superblock": true, 00:17:03.218 "num_base_bdevs": 4, 00:17:03.218 "num_base_bdevs_discovered": 4, 00:17:03.218 "num_base_bdevs_operational": 4, 00:17:03.218 "process": { 00:17:03.218 "type": "rebuild", 00:17:03.218 "target": "spare", 00:17:03.218 "progress": { 00:17:03.218 "blocks": 172800, 00:17:03.218 "percent": 90 00:17:03.218 } 00:17:03.218 }, 00:17:03.218 "base_bdevs_list": [ 00:17:03.218 { 00:17:03.218 "name": "spare", 00:17:03.218 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:03.218 "is_configured": true, 00:17:03.218 "data_offset": 2048, 00:17:03.218 "data_size": 63488 00:17:03.218 }, 00:17:03.218 { 00:17:03.218 "name": "BaseBdev2", 00:17:03.218 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:03.218 
"is_configured": true, 00:17:03.218 "data_offset": 2048, 00:17:03.218 "data_size": 63488 00:17:03.218 }, 00:17:03.218 { 00:17:03.218 "name": "BaseBdev3", 00:17:03.218 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:03.218 "is_configured": true, 00:17:03.218 "data_offset": 2048, 00:17:03.218 "data_size": 63488 00:17:03.218 }, 00:17:03.218 { 00:17:03.218 "name": "BaseBdev4", 00:17:03.218 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:03.218 "is_configured": true, 00:17:03.218 "data_offset": 2048, 00:17:03.218 "data_size": 63488 00:17:03.218 } 00:17:03.218 ] 00:17:03.218 }' 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.218 14:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.155 [2024-12-09 14:49:42.139433] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:04.155 [2024-12-09 14:49:42.139511] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:04.155 [2024-12-09 14:49:42.139667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.414 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.414 "name": "raid_bdev1", 00:17:04.414 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:04.414 "strip_size_kb": 64, 00:17:04.414 "state": "online", 00:17:04.414 "raid_level": "raid5f", 00:17:04.414 "superblock": true, 00:17:04.414 "num_base_bdevs": 4, 00:17:04.414 "num_base_bdevs_discovered": 4, 00:17:04.414 "num_base_bdevs_operational": 4, 00:17:04.414 "base_bdevs_list": [ 00:17:04.414 { 00:17:04.414 "name": "spare", 00:17:04.414 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:04.414 "is_configured": true, 00:17:04.414 "data_offset": 2048, 00:17:04.414 "data_size": 63488 00:17:04.414 }, 00:17:04.414 { 00:17:04.414 "name": "BaseBdev2", 00:17:04.415 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:04.415 "is_configured": true, 00:17:04.415 "data_offset": 2048, 00:17:04.415 "data_size": 63488 00:17:04.415 }, 00:17:04.415 { 00:17:04.415 "name": "BaseBdev3", 00:17:04.415 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:04.415 "is_configured": true, 00:17:04.415 "data_offset": 2048, 00:17:04.415 "data_size": 63488 00:17:04.415 }, 00:17:04.415 { 00:17:04.415 "name": 
"BaseBdev4", 00:17:04.415 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:04.415 "is_configured": true, 00:17:04.415 "data_offset": 2048, 00:17:04.415 "data_size": 63488 00:17:04.415 } 00:17:04.415 ] 00:17:04.415 }' 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:04.415 "name": "raid_bdev1", 00:17:04.415 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:04.415 "strip_size_kb": 64, 00:17:04.415 "state": "online", 00:17:04.415 "raid_level": "raid5f", 00:17:04.415 "superblock": true, 00:17:04.415 "num_base_bdevs": 4, 00:17:04.415 "num_base_bdevs_discovered": 4, 00:17:04.415 "num_base_bdevs_operational": 4, 00:17:04.415 "base_bdevs_list": [ 00:17:04.415 { 00:17:04.415 "name": "spare", 00:17:04.415 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:04.415 "is_configured": true, 00:17:04.415 "data_offset": 2048, 00:17:04.415 "data_size": 63488 00:17:04.415 }, 00:17:04.415 { 00:17:04.415 "name": "BaseBdev2", 00:17:04.415 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:04.415 "is_configured": true, 00:17:04.415 "data_offset": 2048, 00:17:04.415 "data_size": 63488 00:17:04.415 }, 00:17:04.415 { 00:17:04.415 "name": "BaseBdev3", 00:17:04.415 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:04.415 "is_configured": true, 00:17:04.415 "data_offset": 2048, 00:17:04.415 "data_size": 63488 00:17:04.415 }, 00:17:04.415 { 00:17:04.415 "name": "BaseBdev4", 00:17:04.415 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:04.415 "is_configured": true, 00:17:04.415 "data_offset": 2048, 00:17:04.415 "data_size": 63488 00:17:04.415 } 00:17:04.415 ] 00:17:04.415 }' 00:17:04.415 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.675 "name": "raid_bdev1", 00:17:04.675 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:04.675 "strip_size_kb": 64, 00:17:04.675 "state": "online", 00:17:04.675 "raid_level": "raid5f", 00:17:04.675 "superblock": true, 00:17:04.675 "num_base_bdevs": 4, 00:17:04.675 "num_base_bdevs_discovered": 4, 00:17:04.675 "num_base_bdevs_operational": 4, 00:17:04.675 "base_bdevs_list": [ 00:17:04.675 { 
00:17:04.675 "name": "spare", 00:17:04.675 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:04.675 "is_configured": true, 00:17:04.675 "data_offset": 2048, 00:17:04.675 "data_size": 63488 00:17:04.675 }, 00:17:04.675 { 00:17:04.675 "name": "BaseBdev2", 00:17:04.675 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:04.675 "is_configured": true, 00:17:04.675 "data_offset": 2048, 00:17:04.675 "data_size": 63488 00:17:04.675 }, 00:17:04.675 { 00:17:04.675 "name": "BaseBdev3", 00:17:04.675 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:04.675 "is_configured": true, 00:17:04.675 "data_offset": 2048, 00:17:04.675 "data_size": 63488 00:17:04.675 }, 00:17:04.675 { 00:17:04.675 "name": "BaseBdev4", 00:17:04.675 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:04.675 "is_configured": true, 00:17:04.675 "data_offset": 2048, 00:17:04.675 "data_size": 63488 00:17:04.675 } 00:17:04.675 ] 00:17:04.675 }' 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.675 14:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.934 [2024-12-09 14:49:43.021718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.934 [2024-12-09 14:49:43.021751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.934 [2024-12-09 14:49:43.021831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.934 [2024-12-09 14:49:43.021924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.934 [2024-12-09 
14:49:43.021945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:04.934 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:05.194 14:49:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:05.194 /dev/nbd0 00:17:05.194 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.454 1+0 records in 00:17:05.454 1+0 records out 00:17:05.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428978 s, 9.5 MB/s 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:05.454 /dev/nbd1 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.454 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.715 1+0 records in 00:17:05.715 
1+0 records out 00:17:05.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005708 s, 7.2 MB/s 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.715 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.975 14:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.975 
14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.975 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:06.234 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:06.234 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:06.234 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.235 [2024-12-09 14:49:44.260210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.235 [2024-12-09 14:49:44.260290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.235 [2024-12-09 14:49:44.260314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:06.235 [2024-12-09 14:49:44.260325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.235 [2024-12-09 14:49:44.262908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.235 [2024-12-09 14:49:44.262949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.235 [2024-12-09 14:49:44.263058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:06.235 [2024-12-09 14:49:44.263124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.235 [2024-12-09 14:49:44.263319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.235 [2024-12-09 14:49:44.263419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.235 [2024-12-09 14:49:44.263506] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:06.235 spare 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.235 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.495 [2024-12-09 14:49:44.363457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:06.495 [2024-12-09 14:49:44.363513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:06.495 [2024-12-09 14:49:44.363897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:06.495 [2024-12-09 14:49:44.372056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:06.495 [2024-12-09 14:49:44.372132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:06.495 [2024-12-09 14:49:44.372425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.495 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.495 "name": "raid_bdev1", 00:17:06.495 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:06.495 "strip_size_kb": 64, 00:17:06.495 "state": "online", 00:17:06.495 "raid_level": "raid5f", 00:17:06.495 "superblock": true, 00:17:06.495 "num_base_bdevs": 4, 00:17:06.495 "num_base_bdevs_discovered": 4, 00:17:06.495 "num_base_bdevs_operational": 4, 00:17:06.495 "base_bdevs_list": [ 00:17:06.495 { 00:17:06.495 "name": "spare", 00:17:06.495 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:06.495 "is_configured": true, 00:17:06.495 "data_offset": 2048, 00:17:06.495 "data_size": 63488 00:17:06.495 }, 00:17:06.495 { 00:17:06.495 "name": "BaseBdev2", 00:17:06.495 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:06.495 "is_configured": true, 00:17:06.495 "data_offset": 
2048, 00:17:06.495 "data_size": 63488 00:17:06.495 }, 00:17:06.495 { 00:17:06.495 "name": "BaseBdev3", 00:17:06.495 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:06.495 "is_configured": true, 00:17:06.495 "data_offset": 2048, 00:17:06.495 "data_size": 63488 00:17:06.495 }, 00:17:06.495 { 00:17:06.495 "name": "BaseBdev4", 00:17:06.495 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:06.495 "is_configured": true, 00:17:06.495 "data_offset": 2048, 00:17:06.495 "data_size": 63488 00:17:06.496 } 00:17:06.496 ] 00:17:06.496 }' 00:17:06.496 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.496 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.755 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.015 "name": 
"raid_bdev1", 00:17:07.015 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:07.015 "strip_size_kb": 64, 00:17:07.015 "state": "online", 00:17:07.015 "raid_level": "raid5f", 00:17:07.015 "superblock": true, 00:17:07.015 "num_base_bdevs": 4, 00:17:07.015 "num_base_bdevs_discovered": 4, 00:17:07.015 "num_base_bdevs_operational": 4, 00:17:07.015 "base_bdevs_list": [ 00:17:07.015 { 00:17:07.015 "name": "spare", 00:17:07.015 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:07.015 "is_configured": true, 00:17:07.015 "data_offset": 2048, 00:17:07.015 "data_size": 63488 00:17:07.015 }, 00:17:07.015 { 00:17:07.015 "name": "BaseBdev2", 00:17:07.015 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:07.015 "is_configured": true, 00:17:07.015 "data_offset": 2048, 00:17:07.015 "data_size": 63488 00:17:07.015 }, 00:17:07.015 { 00:17:07.015 "name": "BaseBdev3", 00:17:07.015 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:07.015 "is_configured": true, 00:17:07.015 "data_offset": 2048, 00:17:07.015 "data_size": 63488 00:17:07.015 }, 00:17:07.015 { 00:17:07.015 "name": "BaseBdev4", 00:17:07.015 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:07.015 "is_configured": true, 00:17:07.015 "data_offset": 2048, 00:17:07.015 "data_size": 63488 00:17:07.015 } 00:17:07.015 ] 00:17:07.015 }' 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.015 14:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.015 [2024-12-09 14:49:45.036317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.015 "name": "raid_bdev1", 00:17:07.015 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:07.015 "strip_size_kb": 64, 00:17:07.015 "state": "online", 00:17:07.015 "raid_level": "raid5f", 00:17:07.015 "superblock": true, 00:17:07.015 "num_base_bdevs": 4, 00:17:07.015 "num_base_bdevs_discovered": 3, 00:17:07.015 "num_base_bdevs_operational": 3, 00:17:07.015 "base_bdevs_list": [ 00:17:07.015 { 00:17:07.015 "name": null, 00:17:07.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.015 "is_configured": false, 00:17:07.015 "data_offset": 0, 00:17:07.015 "data_size": 63488 00:17:07.015 }, 00:17:07.015 { 00:17:07.015 "name": "BaseBdev2", 00:17:07.015 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:07.015 "is_configured": true, 00:17:07.015 "data_offset": 2048, 00:17:07.015 "data_size": 63488 00:17:07.015 }, 00:17:07.015 { 00:17:07.015 "name": "BaseBdev3", 00:17:07.015 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:07.015 "is_configured": true, 00:17:07.015 "data_offset": 2048, 00:17:07.015 "data_size": 63488 00:17:07.015 }, 00:17:07.015 { 00:17:07.015 "name": "BaseBdev4", 00:17:07.015 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:07.015 "is_configured": true, 00:17:07.015 "data_offset": 
2048, 00:17:07.015 "data_size": 63488 00:17:07.015 } 00:17:07.015 ] 00:17:07.015 }' 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.015 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.611 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.611 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.611 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.611 [2024-12-09 14:49:45.447701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.611 [2024-12-09 14:49:45.447979] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.611 [2024-12-09 14:49:45.448064] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:07.611 [2024-12-09 14:49:45.448147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.611 [2024-12-09 14:49:45.464921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:07.611 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.611 14:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:07.611 [2024-12-09 14:49:45.475736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.548 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.548 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.548 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.549 "name": "raid_bdev1", 00:17:08.549 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:08.549 "strip_size_kb": 64, 00:17:08.549 "state": "online", 00:17:08.549 
"raid_level": "raid5f", 00:17:08.549 "superblock": true, 00:17:08.549 "num_base_bdevs": 4, 00:17:08.549 "num_base_bdevs_discovered": 4, 00:17:08.549 "num_base_bdevs_operational": 4, 00:17:08.549 "process": { 00:17:08.549 "type": "rebuild", 00:17:08.549 "target": "spare", 00:17:08.549 "progress": { 00:17:08.549 "blocks": 19200, 00:17:08.549 "percent": 10 00:17:08.549 } 00:17:08.549 }, 00:17:08.549 "base_bdevs_list": [ 00:17:08.549 { 00:17:08.549 "name": "spare", 00:17:08.549 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:08.549 "is_configured": true, 00:17:08.549 "data_offset": 2048, 00:17:08.549 "data_size": 63488 00:17:08.549 }, 00:17:08.549 { 00:17:08.549 "name": "BaseBdev2", 00:17:08.549 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:08.549 "is_configured": true, 00:17:08.549 "data_offset": 2048, 00:17:08.549 "data_size": 63488 00:17:08.549 }, 00:17:08.549 { 00:17:08.549 "name": "BaseBdev3", 00:17:08.549 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:08.549 "is_configured": true, 00:17:08.549 "data_offset": 2048, 00:17:08.549 "data_size": 63488 00:17:08.549 }, 00:17:08.549 { 00:17:08.549 "name": "BaseBdev4", 00:17:08.549 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:08.549 "is_configured": true, 00:17:08.549 "data_offset": 2048, 00:17:08.549 "data_size": 63488 00:17:08.549 } 00:17:08.549 ] 00:17:08.549 }' 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.549 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.549 [2024-12-09 14:49:46.614520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.808 [2024-12-09 14:49:46.684407] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.808 [2024-12-09 14:49:46.684510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.808 [2024-12-09 14:49:46.684532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.808 [2024-12-09 14:49:46.684543] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.808 "name": "raid_bdev1", 00:17:08.808 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:08.808 "strip_size_kb": 64, 00:17:08.808 "state": "online", 00:17:08.808 "raid_level": "raid5f", 00:17:08.808 "superblock": true, 00:17:08.808 "num_base_bdevs": 4, 00:17:08.808 "num_base_bdevs_discovered": 3, 00:17:08.808 "num_base_bdevs_operational": 3, 00:17:08.808 "base_bdevs_list": [ 00:17:08.808 { 00:17:08.808 "name": null, 00:17:08.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.808 "is_configured": false, 00:17:08.808 "data_offset": 0, 00:17:08.808 "data_size": 63488 00:17:08.808 }, 00:17:08.808 { 00:17:08.808 "name": "BaseBdev2", 00:17:08.808 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:08.808 "is_configured": true, 00:17:08.808 "data_offset": 2048, 00:17:08.808 "data_size": 63488 00:17:08.808 }, 00:17:08.808 { 00:17:08.808 "name": "BaseBdev3", 00:17:08.808 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:08.808 "is_configured": true, 00:17:08.808 "data_offset": 2048, 00:17:08.808 "data_size": 63488 00:17:08.808 }, 00:17:08.808 { 00:17:08.808 "name": "BaseBdev4", 00:17:08.808 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:08.808 "is_configured": true, 00:17:08.808 "data_offset": 2048, 00:17:08.808 "data_size": 63488 00:17:08.808 } 00:17:08.808 ] 00:17:08.808 
}' 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.808 14:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.068 14:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:09.068 14:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.068 14:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.068 [2024-12-09 14:49:47.148359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:09.068 [2024-12-09 14:49:47.148514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.068 [2024-12-09 14:49:47.148586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:09.068 [2024-12-09 14:49:47.148635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.068 [2024-12-09 14:49:47.149262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.068 [2024-12-09 14:49:47.149339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:09.068 [2024-12-09 14:49:47.149506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:09.068 [2024-12-09 14:49:47.149567] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:09.068 [2024-12-09 14:49:47.149640] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:09.068 [2024-12-09 14:49:47.149722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.068 [2024-12-09 14:49:47.166435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:09.068 spare 00:17:09.068 14:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.068 14:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:09.068 [2024-12-09 14:49:47.175983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.447 "name": "raid_bdev1", 00:17:10.447 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:10.447 "strip_size_kb": 64, 00:17:10.447 "state": 
"online", 00:17:10.447 "raid_level": "raid5f", 00:17:10.447 "superblock": true, 00:17:10.447 "num_base_bdevs": 4, 00:17:10.447 "num_base_bdevs_discovered": 4, 00:17:10.447 "num_base_bdevs_operational": 4, 00:17:10.447 "process": { 00:17:10.447 "type": "rebuild", 00:17:10.447 "target": "spare", 00:17:10.447 "progress": { 00:17:10.447 "blocks": 19200, 00:17:10.447 "percent": 10 00:17:10.447 } 00:17:10.447 }, 00:17:10.447 "base_bdevs_list": [ 00:17:10.447 { 00:17:10.447 "name": "spare", 00:17:10.447 "uuid": "54edf607-3f40-5153-93ce-27329de11cc3", 00:17:10.447 "is_configured": true, 00:17:10.447 "data_offset": 2048, 00:17:10.447 "data_size": 63488 00:17:10.447 }, 00:17:10.447 { 00:17:10.447 "name": "BaseBdev2", 00:17:10.447 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:10.447 "is_configured": true, 00:17:10.447 "data_offset": 2048, 00:17:10.447 "data_size": 63488 00:17:10.447 }, 00:17:10.447 { 00:17:10.447 "name": "BaseBdev3", 00:17:10.447 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:10.447 "is_configured": true, 00:17:10.447 "data_offset": 2048, 00:17:10.447 "data_size": 63488 00:17:10.447 }, 00:17:10.447 { 00:17:10.447 "name": "BaseBdev4", 00:17:10.447 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:10.447 "is_configured": true, 00:17:10.447 "data_offset": 2048, 00:17:10.447 "data_size": 63488 00:17:10.447 } 00:17:10.447 ] 00:17:10.447 }' 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:10.447 14:49:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.447 [2024-12-09 14:49:48.330880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.447 [2024-12-09 14:49:48.384579] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.447 [2024-12-09 14:49:48.384658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.447 [2024-12-09 14:49:48.384684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.447 [2024-12-09 14:49:48.384693] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.447 14:49:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.447 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.448 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.448 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.448 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.448 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.448 "name": "raid_bdev1", 00:17:10.448 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:10.448 "strip_size_kb": 64, 00:17:10.448 "state": "online", 00:17:10.448 "raid_level": "raid5f", 00:17:10.448 "superblock": true, 00:17:10.448 "num_base_bdevs": 4, 00:17:10.448 "num_base_bdevs_discovered": 3, 00:17:10.448 "num_base_bdevs_operational": 3, 00:17:10.448 "base_bdevs_list": [ 00:17:10.448 { 00:17:10.448 "name": null, 00:17:10.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.448 "is_configured": false, 00:17:10.448 "data_offset": 0, 00:17:10.448 "data_size": 63488 00:17:10.448 }, 00:17:10.448 { 00:17:10.448 "name": "BaseBdev2", 00:17:10.448 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:10.448 "is_configured": true, 00:17:10.448 "data_offset": 2048, 00:17:10.448 "data_size": 63488 00:17:10.448 }, 00:17:10.448 { 00:17:10.448 "name": "BaseBdev3", 00:17:10.448 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:10.448 "is_configured": true, 00:17:10.448 "data_offset": 2048, 00:17:10.448 "data_size": 63488 00:17:10.448 }, 00:17:10.448 { 00:17:10.448 "name": "BaseBdev4", 00:17:10.448 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:10.448 "is_configured": true, 00:17:10.448 "data_offset": 2048, 00:17:10.448 
"data_size": 63488 00:17:10.448 } 00:17:10.448 ] 00:17:10.448 }' 00:17:10.448 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.448 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.017 "name": "raid_bdev1", 00:17:11.017 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:11.017 "strip_size_kb": 64, 00:17:11.017 "state": "online", 00:17:11.017 "raid_level": "raid5f", 00:17:11.017 "superblock": true, 00:17:11.017 "num_base_bdevs": 4, 00:17:11.017 "num_base_bdevs_discovered": 3, 00:17:11.017 "num_base_bdevs_operational": 3, 00:17:11.017 "base_bdevs_list": [ 00:17:11.017 { 00:17:11.017 "name": null, 00:17:11.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.017 
"is_configured": false, 00:17:11.017 "data_offset": 0, 00:17:11.017 "data_size": 63488 00:17:11.017 }, 00:17:11.017 { 00:17:11.017 "name": "BaseBdev2", 00:17:11.017 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:11.017 "is_configured": true, 00:17:11.017 "data_offset": 2048, 00:17:11.017 "data_size": 63488 00:17:11.017 }, 00:17:11.017 { 00:17:11.017 "name": "BaseBdev3", 00:17:11.017 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:11.017 "is_configured": true, 00:17:11.017 "data_offset": 2048, 00:17:11.017 "data_size": 63488 00:17:11.017 }, 00:17:11.017 { 00:17:11.017 "name": "BaseBdev4", 00:17:11.017 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:11.017 "is_configured": true, 00:17:11.017 "data_offset": 2048, 00:17:11.017 "data_size": 63488 00:17:11.017 } 00:17:11.017 ] 00:17:11.017 }' 00:17:11.017 14:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.017 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.017 14:49:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.017 [2024-12-09 14:49:49.074185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:11.017 [2024-12-09 14:49:49.074252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.017 [2024-12-09 14:49:49.074276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:11.017 [2024-12-09 14:49:49.074286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.017 [2024-12-09 14:49:49.074792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.018 [2024-12-09 14:49:49.074818] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.018 [2024-12-09 14:49:49.074924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:11.018 [2024-12-09 14:49:49.074940] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:11.018 [2024-12-09 14:49:49.074953] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:11.018 [2024-12-09 14:49:49.074965] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:11.018 BaseBdev1 00:17:11.018 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.018 14:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.396 "name": "raid_bdev1", 00:17:12.396 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:12.396 "strip_size_kb": 64, 00:17:12.396 "state": "online", 00:17:12.396 "raid_level": "raid5f", 00:17:12.396 "superblock": true, 00:17:12.396 "num_base_bdevs": 4, 00:17:12.396 "num_base_bdevs_discovered": 3, 00:17:12.396 "num_base_bdevs_operational": 3, 00:17:12.396 "base_bdevs_list": [ 00:17:12.396 { 00:17:12.396 "name": null, 00:17:12.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.396 "is_configured": false, 00:17:12.396 
"data_offset": 0, 00:17:12.396 "data_size": 63488 00:17:12.396 }, 00:17:12.396 { 00:17:12.396 "name": "BaseBdev2", 00:17:12.396 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:12.396 "is_configured": true, 00:17:12.396 "data_offset": 2048, 00:17:12.396 "data_size": 63488 00:17:12.396 }, 00:17:12.396 { 00:17:12.396 "name": "BaseBdev3", 00:17:12.396 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:12.396 "is_configured": true, 00:17:12.396 "data_offset": 2048, 00:17:12.396 "data_size": 63488 00:17:12.396 }, 00:17:12.396 { 00:17:12.396 "name": "BaseBdev4", 00:17:12.396 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:12.396 "is_configured": true, 00:17:12.396 "data_offset": 2048, 00:17:12.396 "data_size": 63488 00:17:12.396 } 00:17:12.396 ] 00:17:12.396 }' 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.396 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.657 "name": "raid_bdev1", 00:17:12.657 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:12.657 "strip_size_kb": 64, 00:17:12.657 "state": "online", 00:17:12.657 "raid_level": "raid5f", 00:17:12.657 "superblock": true, 00:17:12.657 "num_base_bdevs": 4, 00:17:12.657 "num_base_bdevs_discovered": 3, 00:17:12.657 "num_base_bdevs_operational": 3, 00:17:12.657 "base_bdevs_list": [ 00:17:12.657 { 00:17:12.657 "name": null, 00:17:12.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.657 "is_configured": false, 00:17:12.657 "data_offset": 0, 00:17:12.657 "data_size": 63488 00:17:12.657 }, 00:17:12.657 { 00:17:12.657 "name": "BaseBdev2", 00:17:12.657 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:12.657 "is_configured": true, 00:17:12.657 "data_offset": 2048, 00:17:12.657 "data_size": 63488 00:17:12.657 }, 00:17:12.657 { 00:17:12.657 "name": "BaseBdev3", 00:17:12.657 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:12.657 "is_configured": true, 00:17:12.657 "data_offset": 2048, 00:17:12.657 "data_size": 63488 00:17:12.657 }, 00:17:12.657 { 00:17:12.657 "name": "BaseBdev4", 00:17:12.657 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:12.657 "is_configured": true, 00:17:12.657 "data_offset": 2048, 00:17:12.657 "data_size": 63488 00:17:12.657 } 00:17:12.657 ] 00:17:12.657 }' 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.657 
14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.657 [2024-12-09 14:49:50.675607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.657 [2024-12-09 14:49:50.675849] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:12.657 [2024-12-09 14:49:50.675914] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:12.657 request: 00:17:12.657 { 00:17:12.657 "base_bdev": "BaseBdev1", 00:17:12.657 "raid_bdev": "raid_bdev1", 00:17:12.657 "method": "bdev_raid_add_base_bdev", 00:17:12.657 "req_id": 1 00:17:12.657 } 00:17:12.657 Got JSON-RPC error response 00:17:12.657 response: 00:17:12.657 { 00:17:12.657 "code": -22, 00:17:12.657 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:12.657 } 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.657 14:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.595 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.855 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.855 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.855 "name": "raid_bdev1", 00:17:13.855 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:13.855 "strip_size_kb": 64, 00:17:13.855 "state": "online", 00:17:13.855 "raid_level": "raid5f", 00:17:13.855 "superblock": true, 00:17:13.855 "num_base_bdevs": 4, 00:17:13.855 "num_base_bdevs_discovered": 3, 00:17:13.855 "num_base_bdevs_operational": 3, 00:17:13.855 "base_bdevs_list": [ 00:17:13.855 { 00:17:13.855 "name": null, 00:17:13.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.855 "is_configured": false, 00:17:13.855 "data_offset": 0, 00:17:13.855 "data_size": 63488 00:17:13.855 }, 00:17:13.855 { 00:17:13.855 "name": "BaseBdev2", 00:17:13.855 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:13.855 "is_configured": true, 00:17:13.855 "data_offset": 2048, 00:17:13.855 "data_size": 63488 00:17:13.855 }, 00:17:13.855 { 00:17:13.855 "name": "BaseBdev3", 00:17:13.855 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:13.855 "is_configured": true, 00:17:13.855 "data_offset": 2048, 00:17:13.855 "data_size": 63488 00:17:13.855 }, 00:17:13.855 { 00:17:13.855 "name": "BaseBdev4", 00:17:13.855 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:13.855 "is_configured": true, 00:17:13.855 "data_offset": 2048, 00:17:13.855 "data_size": 63488 00:17:13.855 } 00:17:13.855 ] 00:17:13.855 }' 00:17:13.855 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.855 14:49:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.114 "name": "raid_bdev1", 00:17:14.114 "uuid": "efd502af-95ac-4e71-bcbd-b88a5ae8d8e2", 00:17:14.114 "strip_size_kb": 64, 00:17:14.114 "state": "online", 00:17:14.114 "raid_level": "raid5f", 00:17:14.114 "superblock": true, 00:17:14.114 "num_base_bdevs": 4, 00:17:14.114 "num_base_bdevs_discovered": 3, 00:17:14.114 "num_base_bdevs_operational": 3, 00:17:14.114 "base_bdevs_list": [ 00:17:14.114 { 00:17:14.114 "name": null, 00:17:14.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.114 "is_configured": false, 00:17:14.114 "data_offset": 0, 00:17:14.114 "data_size": 63488 00:17:14.114 }, 00:17:14.114 { 00:17:14.114 "name": "BaseBdev2", 00:17:14.114 "uuid": "516864e0-1e37-5afe-a1d1-1d30874f4381", 00:17:14.114 "is_configured": true, 
00:17:14.114 "data_offset": 2048, 00:17:14.114 "data_size": 63488 00:17:14.114 }, 00:17:14.114 { 00:17:14.114 "name": "BaseBdev3", 00:17:14.114 "uuid": "864d8b3e-c25e-57c8-870c-db01ab86bcc0", 00:17:14.114 "is_configured": true, 00:17:14.114 "data_offset": 2048, 00:17:14.114 "data_size": 63488 00:17:14.114 }, 00:17:14.114 { 00:17:14.114 "name": "BaseBdev4", 00:17:14.114 "uuid": "1081f5c8-44c2-5fa7-946f-6801c6a7a795", 00:17:14.114 "is_configured": true, 00:17:14.114 "data_offset": 2048, 00:17:14.114 "data_size": 63488 00:17:14.114 } 00:17:14.114 ] 00:17:14.114 }' 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.114 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86430 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86430 ']' 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86430 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86430 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.374 killing process with pid 86430 00:17:14.374 14:49:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86430' 00:17:14.374 Received shutdown signal, test time was about 60.000000 seconds 00:17:14.374 00:17:14.374 Latency(us) 00:17:14.374 [2024-12-09T14:49:52.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.374 [2024-12-09T14:49:52.496Z] =================================================================================================================== 00:17:14.374 [2024-12-09T14:49:52.496Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86430 00:17:14.374 [2024-12-09 14:49:52.291858] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.374 [2024-12-09 14:49:52.291997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.374 14:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86430 00:17:14.374 [2024-12-09 14:49:52.292084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.374 [2024-12-09 14:49:52.292100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:14.950 [2024-12-09 14:49:52.772130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:15.902 14:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:15.902 00:17:15.902 real 0m26.905s 00:17:15.902 user 0m33.710s 00:17:15.902 sys 0m3.055s 00:17:15.902 14:49:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.902 14:49:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.902 ************************************ 00:17:15.902 END TEST raid5f_rebuild_test_sb 00:17:15.902 ************************************ 00:17:15.902 14:49:53 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:15.902 14:49:53 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:15.902 14:49:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:15.902 14:49:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.902 14:49:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.902 ************************************ 00:17:15.902 START TEST raid_state_function_test_sb_4k 00:17:15.902 ************************************ 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:15.902 14:49:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=87240 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87240' 00:17:15.902 Process raid pid: 87240 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 87240 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87240 ']' 00:17:15.902 14:49:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.902 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.903 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.903 14:49:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.161 [2024-12-09 14:49:54.044971] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:16.161 [2024-12-09 14:49:54.045181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.161 [2024-12-09 14:49:54.218259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.420 [2024-12-09 14:49:54.335158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.421 [2024-12-09 14:49:54.539342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.421 [2024-12-09 14:49:54.539464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.990 [2024-12-09 14:49:54.876562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:16.990 [2024-12-09 14:49:54.876630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:16.990 [2024-12-09 14:49:54.876649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.990 [2024-12-09 14:49:54.876659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.990 
14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.990 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.991 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.991 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.991 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.991 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.991 "name": "Existed_Raid", 00:17:16.991 "uuid": "940da011-14d9-4594-b2ed-ca1108503212", 00:17:16.991 "strip_size_kb": 0, 00:17:16.991 "state": "configuring", 00:17:16.991 "raid_level": "raid1", 00:17:16.991 "superblock": true, 00:17:16.991 "num_base_bdevs": 2, 00:17:16.991 "num_base_bdevs_discovered": 0, 00:17:16.991 "num_base_bdevs_operational": 2, 00:17:16.991 "base_bdevs_list": [ 00:17:16.991 { 00:17:16.991 "name": "BaseBdev1", 00:17:16.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.991 "is_configured": false, 00:17:16.991 "data_offset": 0, 00:17:16.991 "data_size": 0 00:17:16.991 }, 00:17:16.991 { 00:17:16.991 "name": "BaseBdev2", 00:17:16.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.991 "is_configured": false, 00:17:16.991 "data_offset": 0, 00:17:16.991 "data_size": 0 00:17:16.991 } 00:17:16.991 ] 00:17:16.991 }' 00:17:16.991 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.991 14:49:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 [2024-12-09 14:49:55.315774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:17.249 [2024-12-09 14:49:55.315813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 [2024-12-09 14:49:55.323737] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:17.249 [2024-12-09 14:49:55.323780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:17.249 [2024-12-09 14:49:55.323790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.249 [2024-12-09 14:49:55.323802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:17.249 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.249 14:49:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 [2024-12-09 14:49:55.367532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.510 BaseBdev1 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.510 [ 00:17:17.510 { 00:17:17.510 "name": "BaseBdev1", 00:17:17.510 "aliases": [ 00:17:17.510 
"996cae29-9be1-4c74-b5c7-7c807fd9101a" 00:17:17.510 ], 00:17:17.510 "product_name": "Malloc disk", 00:17:17.510 "block_size": 4096, 00:17:17.510 "num_blocks": 8192, 00:17:17.510 "uuid": "996cae29-9be1-4c74-b5c7-7c807fd9101a", 00:17:17.510 "assigned_rate_limits": { 00:17:17.510 "rw_ios_per_sec": 0, 00:17:17.510 "rw_mbytes_per_sec": 0, 00:17:17.510 "r_mbytes_per_sec": 0, 00:17:17.510 "w_mbytes_per_sec": 0 00:17:17.510 }, 00:17:17.510 "claimed": true, 00:17:17.510 "claim_type": "exclusive_write", 00:17:17.510 "zoned": false, 00:17:17.510 "supported_io_types": { 00:17:17.510 "read": true, 00:17:17.510 "write": true, 00:17:17.510 "unmap": true, 00:17:17.510 "flush": true, 00:17:17.510 "reset": true, 00:17:17.510 "nvme_admin": false, 00:17:17.510 "nvme_io": false, 00:17:17.510 "nvme_io_md": false, 00:17:17.510 "write_zeroes": true, 00:17:17.510 "zcopy": true, 00:17:17.510 "get_zone_info": false, 00:17:17.510 "zone_management": false, 00:17:17.510 "zone_append": false, 00:17:17.510 "compare": false, 00:17:17.510 "compare_and_write": false, 00:17:17.510 "abort": true, 00:17:17.510 "seek_hole": false, 00:17:17.510 "seek_data": false, 00:17:17.510 "copy": true, 00:17:17.510 "nvme_iov_md": false 00:17:17.510 }, 00:17:17.510 "memory_domains": [ 00:17:17.510 { 00:17:17.510 "dma_device_id": "system", 00:17:17.510 "dma_device_type": 1 00:17:17.510 }, 00:17:17.510 { 00:17:17.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.510 "dma_device_type": 2 00:17:17.510 } 00:17:17.510 ], 00:17:17.510 "driver_specific": {} 00:17:17.510 } 00:17:17.510 ] 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.510 "name": "Existed_Raid", 00:17:17.510 "uuid": "d4613f3d-6be2-4c20-bc80-2a092e53ea8b", 00:17:17.510 "strip_size_kb": 0, 00:17:17.510 "state": "configuring", 00:17:17.510 "raid_level": "raid1", 00:17:17.510 "superblock": true, 00:17:17.510 "num_base_bdevs": 2, 00:17:17.510 
"num_base_bdevs_discovered": 1, 00:17:17.510 "num_base_bdevs_operational": 2, 00:17:17.510 "base_bdevs_list": [ 00:17:17.510 { 00:17:17.510 "name": "BaseBdev1", 00:17:17.510 "uuid": "996cae29-9be1-4c74-b5c7-7c807fd9101a", 00:17:17.510 "is_configured": true, 00:17:17.510 "data_offset": 256, 00:17:17.510 "data_size": 7936 00:17:17.510 }, 00:17:17.510 { 00:17:17.510 "name": "BaseBdev2", 00:17:17.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.510 "is_configured": false, 00:17:17.510 "data_offset": 0, 00:17:17.510 "data_size": 0 00:17:17.510 } 00:17:17.510 ] 00:17:17.510 }' 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.510 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.770 [2024-12-09 14:49:55.866791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:17.770 [2024-12-09 14:49:55.866909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.770 [2024-12-09 14:49:55.878797] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.770 [2024-12-09 14:49:55.880771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.770 [2024-12-09 14:49:55.880863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.770 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.030 "name": "Existed_Raid", 00:17:18.030 "uuid": "3e5d6a2b-3aac-40cd-9abc-7327628fb700", 00:17:18.030 "strip_size_kb": 0, 00:17:18.030 "state": "configuring", 00:17:18.030 "raid_level": "raid1", 00:17:18.030 "superblock": true, 00:17:18.030 "num_base_bdevs": 2, 00:17:18.030 "num_base_bdevs_discovered": 1, 00:17:18.030 "num_base_bdevs_operational": 2, 00:17:18.030 "base_bdevs_list": [ 00:17:18.030 { 00:17:18.030 "name": "BaseBdev1", 00:17:18.030 "uuid": "996cae29-9be1-4c74-b5c7-7c807fd9101a", 00:17:18.030 "is_configured": true, 00:17:18.030 "data_offset": 256, 00:17:18.030 "data_size": 7936 00:17:18.030 }, 00:17:18.030 { 00:17:18.030 "name": "BaseBdev2", 00:17:18.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.030 "is_configured": false, 00:17:18.030 "data_offset": 0, 00:17:18.030 "data_size": 0 00:17:18.030 } 00:17:18.030 ] 00:17:18.030 }' 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.030 14:49:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.291 14:49:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.291 [2024-12-09 14:49:56.341054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.291 [2024-12-09 14:49:56.341418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:18.291 [2024-12-09 14:49:56.341474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.291 [2024-12-09 14:49:56.341795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:18.291 BaseBdev2 00:17:18.291 [2024-12-09 14:49:56.342020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:18.291 [2024-12-09 14:49:56.342037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:18.291 [2024-12-09 14:49:56.342177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:18.291 14:49:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.291 [ 00:17:18.291 { 00:17:18.291 "name": "BaseBdev2", 00:17:18.291 "aliases": [ 00:17:18.291 "77b0c032-70d0-4350-abca-22225e660480" 00:17:18.291 ], 00:17:18.291 "product_name": "Malloc disk", 00:17:18.291 "block_size": 4096, 00:17:18.291 "num_blocks": 8192, 00:17:18.291 "uuid": "77b0c032-70d0-4350-abca-22225e660480", 00:17:18.291 "assigned_rate_limits": { 00:17:18.291 "rw_ios_per_sec": 0, 00:17:18.291 "rw_mbytes_per_sec": 0, 00:17:18.291 "r_mbytes_per_sec": 0, 00:17:18.291 "w_mbytes_per_sec": 0 00:17:18.291 }, 00:17:18.291 "claimed": true, 00:17:18.291 "claim_type": "exclusive_write", 00:17:18.291 "zoned": false, 00:17:18.291 "supported_io_types": { 00:17:18.291 "read": true, 00:17:18.291 "write": true, 00:17:18.291 "unmap": true, 00:17:18.291 "flush": true, 00:17:18.291 "reset": true, 00:17:18.291 "nvme_admin": false, 00:17:18.291 "nvme_io": false, 00:17:18.291 "nvme_io_md": false, 00:17:18.291 "write_zeroes": true, 00:17:18.291 "zcopy": true, 00:17:18.291 "get_zone_info": false, 00:17:18.291 "zone_management": false, 00:17:18.291 "zone_append": false, 00:17:18.291 "compare": false, 00:17:18.291 "compare_and_write": false, 00:17:18.291 "abort": true, 00:17:18.291 "seek_hole": false, 00:17:18.291 "seek_data": false, 00:17:18.291 "copy": true, 00:17:18.291 "nvme_iov_md": false 
00:17:18.291 }, 00:17:18.291 "memory_domains": [ 00:17:18.291 { 00:17:18.291 "dma_device_id": "system", 00:17:18.291 "dma_device_type": 1 00:17:18.291 }, 00:17:18.291 { 00:17:18.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.291 "dma_device_type": 2 00:17:18.291 } 00:17:18.291 ], 00:17:18.291 "driver_specific": {} 00:17:18.291 } 00:17:18.291 ] 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.291 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.551 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.551 "name": "Existed_Raid", 00:17:18.551 "uuid": "3e5d6a2b-3aac-40cd-9abc-7327628fb700", 00:17:18.551 "strip_size_kb": 0, 00:17:18.551 "state": "online", 00:17:18.551 "raid_level": "raid1", 00:17:18.551 "superblock": true, 00:17:18.551 "num_base_bdevs": 2, 00:17:18.551 "num_base_bdevs_discovered": 2, 00:17:18.551 "num_base_bdevs_operational": 2, 00:17:18.551 "base_bdevs_list": [ 00:17:18.551 { 00:17:18.551 "name": "BaseBdev1", 00:17:18.551 "uuid": "996cae29-9be1-4c74-b5c7-7c807fd9101a", 00:17:18.551 "is_configured": true, 00:17:18.551 "data_offset": 256, 00:17:18.551 "data_size": 7936 00:17:18.551 }, 00:17:18.551 { 00:17:18.551 "name": "BaseBdev2", 00:17:18.551 "uuid": "77b0c032-70d0-4350-abca-22225e660480", 00:17:18.551 "is_configured": true, 00:17:18.551 "data_offset": 256, 00:17:18.551 "data_size": 7936 00:17:18.551 } 00:17:18.551 ] 00:17:18.551 }' 00:17:18.551 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.551 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:18.811 14:49:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.811 [2024-12-09 14:49:56.808651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:18.811 "name": "Existed_Raid", 00:17:18.811 "aliases": [ 00:17:18.811 "3e5d6a2b-3aac-40cd-9abc-7327628fb700" 00:17:18.811 ], 00:17:18.811 "product_name": "Raid Volume", 00:17:18.811 "block_size": 4096, 00:17:18.811 "num_blocks": 7936, 00:17:18.811 "uuid": "3e5d6a2b-3aac-40cd-9abc-7327628fb700", 00:17:18.811 "assigned_rate_limits": { 00:17:18.811 "rw_ios_per_sec": 0, 00:17:18.811 "rw_mbytes_per_sec": 0, 00:17:18.811 "r_mbytes_per_sec": 0, 00:17:18.811 "w_mbytes_per_sec": 0 00:17:18.811 }, 00:17:18.811 "claimed": false, 00:17:18.811 "zoned": false, 00:17:18.811 "supported_io_types": { 00:17:18.811 "read": true, 
00:17:18.811 "write": true, 00:17:18.811 "unmap": false, 00:17:18.811 "flush": false, 00:17:18.811 "reset": true, 00:17:18.811 "nvme_admin": false, 00:17:18.811 "nvme_io": false, 00:17:18.811 "nvme_io_md": false, 00:17:18.811 "write_zeroes": true, 00:17:18.811 "zcopy": false, 00:17:18.811 "get_zone_info": false, 00:17:18.811 "zone_management": false, 00:17:18.811 "zone_append": false, 00:17:18.811 "compare": false, 00:17:18.811 "compare_and_write": false, 00:17:18.811 "abort": false, 00:17:18.811 "seek_hole": false, 00:17:18.811 "seek_data": false, 00:17:18.811 "copy": false, 00:17:18.811 "nvme_iov_md": false 00:17:18.811 }, 00:17:18.811 "memory_domains": [ 00:17:18.811 { 00:17:18.811 "dma_device_id": "system", 00:17:18.811 "dma_device_type": 1 00:17:18.811 }, 00:17:18.811 { 00:17:18.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.811 "dma_device_type": 2 00:17:18.811 }, 00:17:18.811 { 00:17:18.811 "dma_device_id": "system", 00:17:18.811 "dma_device_type": 1 00:17:18.811 }, 00:17:18.811 { 00:17:18.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.811 "dma_device_type": 2 00:17:18.811 } 00:17:18.811 ], 00:17:18.811 "driver_specific": { 00:17:18.811 "raid": { 00:17:18.811 "uuid": "3e5d6a2b-3aac-40cd-9abc-7327628fb700", 00:17:18.811 "strip_size_kb": 0, 00:17:18.811 "state": "online", 00:17:18.811 "raid_level": "raid1", 00:17:18.811 "superblock": true, 00:17:18.811 "num_base_bdevs": 2, 00:17:18.811 "num_base_bdevs_discovered": 2, 00:17:18.811 "num_base_bdevs_operational": 2, 00:17:18.811 "base_bdevs_list": [ 00:17:18.811 { 00:17:18.811 "name": "BaseBdev1", 00:17:18.811 "uuid": "996cae29-9be1-4c74-b5c7-7c807fd9101a", 00:17:18.811 "is_configured": true, 00:17:18.811 "data_offset": 256, 00:17:18.811 "data_size": 7936 00:17:18.811 }, 00:17:18.811 { 00:17:18.811 "name": "BaseBdev2", 00:17:18.811 "uuid": "77b0c032-70d0-4350-abca-22225e660480", 00:17:18.811 "is_configured": true, 00:17:18.811 "data_offset": 256, 00:17:18.811 "data_size": 7936 00:17:18.811 } 
00:17:18.811 ] 00:17:18.811 } 00:17:18.811 } 00:17:18.811 }' 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:18.811 BaseBdev2' 00:17:18.811 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:19.071 14:49:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.071 [2024-12-09 14:49:57.051949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.071 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.072 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.072 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.072 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.072 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.072 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.072 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.331 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.331 "name": "Existed_Raid", 00:17:19.331 "uuid": "3e5d6a2b-3aac-40cd-9abc-7327628fb700", 00:17:19.331 "strip_size_kb": 0, 00:17:19.331 "state": "online", 00:17:19.331 "raid_level": "raid1", 00:17:19.331 "superblock": true, 00:17:19.331 "num_base_bdevs": 2, 00:17:19.331 
"num_base_bdevs_discovered": 1, 00:17:19.331 "num_base_bdevs_operational": 1, 00:17:19.331 "base_bdevs_list": [ 00:17:19.331 { 00:17:19.331 "name": null, 00:17:19.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.331 "is_configured": false, 00:17:19.331 "data_offset": 0, 00:17:19.331 "data_size": 7936 00:17:19.331 }, 00:17:19.331 { 00:17:19.331 "name": "BaseBdev2", 00:17:19.331 "uuid": "77b0c032-70d0-4350-abca-22225e660480", 00:17:19.331 "is_configured": true, 00:17:19.331 "data_offset": 256, 00:17:19.331 "data_size": 7936 00:17:19.331 } 00:17:19.331 ] 00:17:19.331 }' 00:17:19.331 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.331 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.590 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:19.590 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:19.590 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.590 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.590 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.590 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:19.590 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.591 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:19.591 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:19.591 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:19.591 14:49:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.591 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.591 [2024-12-09 14:49:57.645620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:19.591 [2024-12-09 14:49:57.645724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.851 [2024-12-09 14:49:57.741314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.851 [2024-12-09 14:49:57.741377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.851 [2024-12-09 14:49:57.741402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 87240 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87240 ']' 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87240 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87240 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.851 killing process with pid 87240 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87240' 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87240 00:17:19.851 [2024-12-09 14:49:57.813773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.851 14:49:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87240 00:17:19.851 [2024-12-09 14:49:57.832987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.233 14:49:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:21.233 00:17:21.233 real 0m4.987s 00:17:21.233 user 0m7.188s 00:17:21.233 sys 0m0.871s 00:17:21.233 14:49:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:17:21.233 14:49:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.233 ************************************ 00:17:21.233 END TEST raid_state_function_test_sb_4k 00:17:21.233 ************************************ 00:17:21.233 14:49:58 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:21.233 14:49:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:21.233 14:49:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.233 14:49:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.233 ************************************ 00:17:21.233 START TEST raid_superblock_test_4k 00:17:21.233 ************************************ 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=87490 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 87490 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 87490 ']' 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.233 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.233 [2024-12-09 14:49:59.097916] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:17:21.233 [2024-12-09 14:49:59.098084] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87490 ] 00:17:21.233 [2024-12-09 14:49:59.268326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.493 [2024-12-09 14:49:59.379045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.493 [2024-12-09 14:49:59.567987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.493 [2024-12-09 14:49:59.568045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.061 malloc1 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.061 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.062 [2024-12-09 14:49:59.984367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:22.062 [2024-12-09 14:49:59.984468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.062 [2024-12-09 14:49:59.984532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:22.062 [2024-12-09 14:49:59.984562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.062 [2024-12-09 14:49:59.986641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.062 [2024-12-09 14:49:59.986715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:22.062 pt1 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.062 14:49:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.062 malloc2 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.062 [2024-12-09 14:50:00.042478] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:22.062 [2024-12-09 14:50:00.042536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.062 [2024-12-09 14:50:00.042563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:22.062 [2024-12-09 14:50:00.042584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.062 [2024-12-09 14:50:00.044877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.062 [2024-12-09 
14:50:00.044912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:22.062 pt2 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.062 [2024-12-09 14:50:00.054501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:22.062 [2024-12-09 14:50:00.056264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.062 [2024-12-09 14:50:00.056443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:22.062 [2024-12-09 14:50:00.056472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:22.062 [2024-12-09 14:50:00.056824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:22.062 [2024-12-09 14:50:00.057044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:22.062 [2024-12-09 14:50:00.057099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:22.062 [2024-12-09 14:50:00.057337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.062 "name": "raid_bdev1", 00:17:22.062 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:22.062 "strip_size_kb": 0, 00:17:22.062 "state": "online", 00:17:22.062 "raid_level": "raid1", 00:17:22.062 "superblock": true, 00:17:22.062 "num_base_bdevs": 2, 00:17:22.062 
"num_base_bdevs_discovered": 2, 00:17:22.062 "num_base_bdevs_operational": 2, 00:17:22.062 "base_bdevs_list": [ 00:17:22.062 { 00:17:22.062 "name": "pt1", 00:17:22.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.062 "is_configured": true, 00:17:22.062 "data_offset": 256, 00:17:22.062 "data_size": 7936 00:17:22.062 }, 00:17:22.062 { 00:17:22.062 "name": "pt2", 00:17:22.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.062 "is_configured": true, 00:17:22.062 "data_offset": 256, 00:17:22.062 "data_size": 7936 00:17:22.062 } 00:17:22.062 ] 00:17:22.062 }' 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.062 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.632 [2024-12-09 14:50:00.525981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.632 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:22.632 "name": "raid_bdev1", 00:17:22.632 "aliases": [ 00:17:22.632 "819608dd-6c1f-4029-a26e-582936d620ba" 00:17:22.632 ], 00:17:22.632 "product_name": "Raid Volume", 00:17:22.632 "block_size": 4096, 00:17:22.632 "num_blocks": 7936, 00:17:22.632 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:22.632 "assigned_rate_limits": { 00:17:22.632 "rw_ios_per_sec": 0, 00:17:22.632 "rw_mbytes_per_sec": 0, 00:17:22.632 "r_mbytes_per_sec": 0, 00:17:22.632 "w_mbytes_per_sec": 0 00:17:22.632 }, 00:17:22.632 "claimed": false, 00:17:22.632 "zoned": false, 00:17:22.632 "supported_io_types": { 00:17:22.632 "read": true, 00:17:22.632 "write": true, 00:17:22.632 "unmap": false, 00:17:22.632 "flush": false, 00:17:22.632 "reset": true, 00:17:22.632 "nvme_admin": false, 00:17:22.632 "nvme_io": false, 00:17:22.632 "nvme_io_md": false, 00:17:22.632 "write_zeroes": true, 00:17:22.632 "zcopy": false, 00:17:22.632 "get_zone_info": false, 00:17:22.632 "zone_management": false, 00:17:22.632 "zone_append": false, 00:17:22.632 "compare": false, 00:17:22.632 "compare_and_write": false, 00:17:22.632 "abort": false, 00:17:22.632 "seek_hole": false, 00:17:22.632 "seek_data": false, 00:17:22.632 "copy": false, 00:17:22.632 "nvme_iov_md": false 00:17:22.632 }, 00:17:22.632 "memory_domains": [ 00:17:22.632 { 00:17:22.632 "dma_device_id": "system", 00:17:22.632 "dma_device_type": 1 00:17:22.632 }, 00:17:22.632 { 00:17:22.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.632 "dma_device_type": 2 00:17:22.632 }, 00:17:22.632 { 00:17:22.632 "dma_device_id": "system", 00:17:22.632 "dma_device_type": 1 00:17:22.632 }, 00:17:22.632 { 00:17:22.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.632 "dma_device_type": 2 00:17:22.632 } 00:17:22.632 ], 
00:17:22.632 "driver_specific": { 00:17:22.632 "raid": { 00:17:22.632 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:22.632 "strip_size_kb": 0, 00:17:22.632 "state": "online", 00:17:22.632 "raid_level": "raid1", 00:17:22.632 "superblock": true, 00:17:22.632 "num_base_bdevs": 2, 00:17:22.632 "num_base_bdevs_discovered": 2, 00:17:22.632 "num_base_bdevs_operational": 2, 00:17:22.633 "base_bdevs_list": [ 00:17:22.633 { 00:17:22.633 "name": "pt1", 00:17:22.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.633 "is_configured": true, 00:17:22.633 "data_offset": 256, 00:17:22.633 "data_size": 7936 00:17:22.633 }, 00:17:22.633 { 00:17:22.633 "name": "pt2", 00:17:22.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.633 "is_configured": true, 00:17:22.633 "data_offset": 256, 00:17:22.633 "data_size": 7936 00:17:22.633 } 00:17:22.633 ] 00:17:22.633 } 00:17:22.633 } 00:17:22.633 }' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:22.633 pt2' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.633 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:22.893 [2024-12-09 14:50:00.777542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=819608dd-6c1f-4029-a26e-582936d620ba 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 819608dd-6c1f-4029-a26e-582936d620ba ']' 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 [2024-12-09 14:50:00.825155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.893 [2024-12-09 14:50:00.825181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.893 [2024-12-09 14:50:00.825265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.893 [2024-12-09 14:50:00.825322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.893 [2024-12-09 14:50:00.825334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 [2024-12-09 14:50:00.964942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:22.893 [2024-12-09 14:50:00.966802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:22.893 [2024-12-09 14:50:00.966870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:22.893 [2024-12-09 14:50:00.966927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:22.893 [2024-12-09 14:50:00.966941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.893 [2024-12-09 14:50:00.966952] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:22.893 request: 00:17:22.893 { 00:17:22.893 "name": "raid_bdev1", 00:17:22.893 "raid_level": "raid1", 00:17:22.893 "base_bdevs": [ 00:17:22.893 "malloc1", 00:17:22.893 "malloc2" 00:17:22.893 ], 00:17:22.893 "superblock": false, 00:17:22.893 "method": "bdev_raid_create", 00:17:22.893 "req_id": 1 00:17:22.893 } 00:17:22.893 Got JSON-RPC error response 00:17:22.893 response: 00:17:22.893 { 00:17:22.893 "code": -17, 00:17:22.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:22.893 } 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:22.893 14:50:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.154 [2024-12-09 14:50:01.032829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.154 [2024-12-09 14:50:01.032930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.154 [2024-12-09 14:50:01.032979] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:23.154 [2024-12-09 14:50:01.033011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.154 [2024-12-09 14:50:01.035131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.154 [2024-12-09 14:50:01.035219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:23.154 [2024-12-09 14:50:01.035360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:23.154 [2024-12-09 14:50:01.035457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:23.154 pt1 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.154 "name": "raid_bdev1", 00:17:23.154 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:23.154 "strip_size_kb": 0, 00:17:23.154 "state": "configuring", 00:17:23.154 "raid_level": "raid1", 00:17:23.154 "superblock": true, 00:17:23.154 "num_base_bdevs": 2, 00:17:23.154 "num_base_bdevs_discovered": 1, 00:17:23.154 "num_base_bdevs_operational": 2, 00:17:23.154 "base_bdevs_list": [ 00:17:23.154 { 00:17:23.154 "name": "pt1", 00:17:23.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:23.154 "is_configured": true, 00:17:23.154 "data_offset": 256, 00:17:23.154 "data_size": 7936 00:17:23.154 }, 00:17:23.154 { 00:17:23.154 "name": null, 00:17:23.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.154 "is_configured": false, 00:17:23.154 "data_offset": 256, 00:17:23.154 "data_size": 7936 00:17:23.154 } 
00:17:23.154 ] 00:17:23.154 }' 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.154 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.414 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:23.414 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.415 [2024-12-09 14:50:01.476136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.415 [2024-12-09 14:50:01.476212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.415 [2024-12-09 14:50:01.476236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:23.415 [2024-12-09 14:50:01.476247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.415 [2024-12-09 14:50:01.476730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.415 [2024-12-09 14:50:01.476751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.415 [2024-12-09 14:50:01.476836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:23.415 [2024-12-09 14:50:01.476863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.415 [2024-12-09 14:50:01.476989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:23.415 [2024-12-09 14:50:01.477006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:23.415 [2024-12-09 14:50:01.477245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:23.415 [2024-12-09 14:50:01.477411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:23.415 [2024-12-09 14:50:01.477419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:23.415 [2024-12-09 14:50:01.477562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.415 pt2 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.415 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.674 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.674 "name": "raid_bdev1", 00:17:23.674 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:23.674 "strip_size_kb": 0, 00:17:23.674 "state": "online", 00:17:23.674 "raid_level": "raid1", 00:17:23.674 "superblock": true, 00:17:23.674 "num_base_bdevs": 2, 00:17:23.674 "num_base_bdevs_discovered": 2, 00:17:23.674 "num_base_bdevs_operational": 2, 00:17:23.674 "base_bdevs_list": [ 00:17:23.674 { 00:17:23.674 "name": "pt1", 00:17:23.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:23.674 "is_configured": true, 00:17:23.674 "data_offset": 256, 00:17:23.674 "data_size": 7936 00:17:23.674 }, 00:17:23.674 { 00:17:23.674 "name": "pt2", 00:17:23.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.674 "is_configured": true, 00:17:23.674 "data_offset": 256, 00:17:23.674 "data_size": 7936 00:17:23.674 } 00:17:23.674 ] 00:17:23.674 }' 00:17:23.674 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.674 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.932 [2024-12-09 14:50:01.919684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.932 14:50:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.933 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:23.933 "name": "raid_bdev1", 00:17:23.933 "aliases": [ 00:17:23.933 "819608dd-6c1f-4029-a26e-582936d620ba" 00:17:23.933 ], 00:17:23.933 "product_name": "Raid Volume", 00:17:23.933 "block_size": 4096, 00:17:23.933 "num_blocks": 7936, 00:17:23.933 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:23.933 "assigned_rate_limits": { 00:17:23.933 "rw_ios_per_sec": 0, 00:17:23.933 "rw_mbytes_per_sec": 0, 00:17:23.933 "r_mbytes_per_sec": 0, 00:17:23.933 "w_mbytes_per_sec": 0 00:17:23.933 }, 00:17:23.933 "claimed": false, 00:17:23.933 "zoned": false, 00:17:23.933 "supported_io_types": { 00:17:23.933 "read": true, 00:17:23.933 "write": true, 00:17:23.933 "unmap": false, 
00:17:23.933 "flush": false, 00:17:23.933 "reset": true, 00:17:23.933 "nvme_admin": false, 00:17:23.933 "nvme_io": false, 00:17:23.933 "nvme_io_md": false, 00:17:23.933 "write_zeroes": true, 00:17:23.933 "zcopy": false, 00:17:23.933 "get_zone_info": false, 00:17:23.933 "zone_management": false, 00:17:23.933 "zone_append": false, 00:17:23.933 "compare": false, 00:17:23.933 "compare_and_write": false, 00:17:23.933 "abort": false, 00:17:23.933 "seek_hole": false, 00:17:23.933 "seek_data": false, 00:17:23.933 "copy": false, 00:17:23.933 "nvme_iov_md": false 00:17:23.933 }, 00:17:23.933 "memory_domains": [ 00:17:23.933 { 00:17:23.933 "dma_device_id": "system", 00:17:23.933 "dma_device_type": 1 00:17:23.933 }, 00:17:23.933 { 00:17:23.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.933 "dma_device_type": 2 00:17:23.933 }, 00:17:23.933 { 00:17:23.933 "dma_device_id": "system", 00:17:23.933 "dma_device_type": 1 00:17:23.933 }, 00:17:23.933 { 00:17:23.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.933 "dma_device_type": 2 00:17:23.933 } 00:17:23.933 ], 00:17:23.933 "driver_specific": { 00:17:23.933 "raid": { 00:17:23.933 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:23.933 "strip_size_kb": 0, 00:17:23.933 "state": "online", 00:17:23.933 "raid_level": "raid1", 00:17:23.933 "superblock": true, 00:17:23.933 "num_base_bdevs": 2, 00:17:23.933 "num_base_bdevs_discovered": 2, 00:17:23.933 "num_base_bdevs_operational": 2, 00:17:23.933 "base_bdevs_list": [ 00:17:23.933 { 00:17:23.933 "name": "pt1", 00:17:23.933 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:23.933 "is_configured": true, 00:17:23.933 "data_offset": 256, 00:17:23.933 "data_size": 7936 00:17:23.933 }, 00:17:23.933 { 00:17:23.933 "name": "pt2", 00:17:23.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.933 "is_configured": true, 00:17:23.933 "data_offset": 256, 00:17:23.933 "data_size": 7936 00:17:23.933 } 00:17:23.933 ] 00:17:23.933 } 00:17:23.933 } 00:17:23.933 }' 00:17:23.933 
14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:23.933 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:23.933 pt2' 00:17:23.933 14:50:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.933 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:23.933 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.933 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:23.933 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.933 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.933 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.933 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.192 
14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.192 [2024-12-09 14:50:02.127305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 819608dd-6c1f-4029-a26e-582936d620ba '!=' 819608dd-6c1f-4029-a26e-582936d620ba ']' 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.192 [2024-12-09 14:50:02.170959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:24.192 
14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.192 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.193 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.193 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.193 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.193 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.193 "name": "raid_bdev1", 00:17:24.193 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 
00:17:24.193 "strip_size_kb": 0, 00:17:24.193 "state": "online", 00:17:24.193 "raid_level": "raid1", 00:17:24.193 "superblock": true, 00:17:24.193 "num_base_bdevs": 2, 00:17:24.193 "num_base_bdevs_discovered": 1, 00:17:24.193 "num_base_bdevs_operational": 1, 00:17:24.193 "base_bdevs_list": [ 00:17:24.193 { 00:17:24.193 "name": null, 00:17:24.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.193 "is_configured": false, 00:17:24.193 "data_offset": 0, 00:17:24.193 "data_size": 7936 00:17:24.193 }, 00:17:24.193 { 00:17:24.193 "name": "pt2", 00:17:24.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.193 "is_configured": true, 00:17:24.193 "data_offset": 256, 00:17:24.193 "data_size": 7936 00:17:24.193 } 00:17:24.193 ] 00:17:24.193 }' 00:17:24.193 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.193 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.762 [2024-12-09 14:50:02.598204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.762 [2024-12-09 14:50:02.598278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.762 [2024-12-09 14:50:02.598381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.762 [2024-12-09 14:50:02.598455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.762 [2024-12-09 14:50:02.598507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:24.762 14:50:02 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:24.762 14:50:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.762 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.762 [2024-12-09 14:50:02.670067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:24.763 [2024-12-09 14:50:02.670166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.763 [2024-12-09 14:50:02.670189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:24.763 [2024-12-09 14:50:02.670201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.763 [2024-12-09 14:50:02.672489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.763 [2024-12-09 14:50:02.672532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:24.763 [2024-12-09 14:50:02.672633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:24.763 [2024-12-09 14:50:02.672683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.763 [2024-12-09 14:50:02.672813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:24.763 [2024-12-09 14:50:02.672825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.763 [2024-12-09 14:50:02.673058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:24.763 [2024-12-09 14:50:02.673211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:24.763 [2024-12-09 14:50:02.673221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:17:24.763 [2024-12-09 14:50:02.673379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.763 pt2 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.763 "name": "raid_bdev1", 00:17:24.763 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:24.763 "strip_size_kb": 0, 00:17:24.763 "state": "online", 00:17:24.763 "raid_level": "raid1", 00:17:24.763 "superblock": true, 00:17:24.763 "num_base_bdevs": 2, 00:17:24.763 "num_base_bdevs_discovered": 1, 00:17:24.763 "num_base_bdevs_operational": 1, 00:17:24.763 "base_bdevs_list": [ 00:17:24.763 { 00:17:24.763 "name": null, 00:17:24.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.763 "is_configured": false, 00:17:24.763 "data_offset": 256, 00:17:24.763 "data_size": 7936 00:17:24.763 }, 00:17:24.763 { 00:17:24.763 "name": "pt2", 00:17:24.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.763 "is_configured": true, 00:17:24.763 "data_offset": 256, 00:17:24.763 "data_size": 7936 00:17:24.763 } 00:17:24.763 ] 00:17:24.763 }' 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.763 14:50:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.021 [2024-12-09 14:50:03.129286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.021 [2024-12-09 14:50:03.129319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.021 [2024-12-09 14:50:03.129397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.021 [2024-12-09 14:50:03.129448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.021 [2024-12-09 14:50:03.129457] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.021 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 [2024-12-09 14:50:03.173234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:25.280 [2024-12-09 14:50:03.173360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.280 [2024-12-09 14:50:03.173386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:25.280 [2024-12-09 14:50:03.173395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.280 [2024-12-09 14:50:03.175615] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.280 [2024-12-09 14:50:03.175654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:25.280 [2024-12-09 14:50:03.175752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:25.280 [2024-12-09 14:50:03.175807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:25.280 [2024-12-09 14:50:03.175972] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:25.280 [2024-12-09 14:50:03.175984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.280 [2024-12-09 14:50:03.176001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:25.280 [2024-12-09 14:50:03.176053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.280 [2024-12-09 14:50:03.176129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:25.280 [2024-12-09 14:50:03.176137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.280 [2024-12-09 14:50:03.176390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:25.280 [2024-12-09 14:50:03.176647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:25.280 [2024-12-09 14:50:03.176668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:25.280 [2024-12-09 14:50:03.176842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.280 pt1 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.280 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.281 "name": "raid_bdev1", 00:17:25.281 "uuid": "819608dd-6c1f-4029-a26e-582936d620ba", 00:17:25.281 "strip_size_kb": 0, 00:17:25.281 "state": "online", 00:17:25.281 "raid_level": "raid1", 
00:17:25.281 "superblock": true, 00:17:25.281 "num_base_bdevs": 2, 00:17:25.281 "num_base_bdevs_discovered": 1, 00:17:25.281 "num_base_bdevs_operational": 1, 00:17:25.281 "base_bdevs_list": [ 00:17:25.281 { 00:17:25.281 "name": null, 00:17:25.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.281 "is_configured": false, 00:17:25.281 "data_offset": 256, 00:17:25.281 "data_size": 7936 00:17:25.281 }, 00:17:25.281 { 00:17:25.281 "name": "pt2", 00:17:25.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.281 "is_configured": true, 00:17:25.281 "data_offset": 256, 00:17:25.281 "data_size": 7936 00:17:25.281 } 00:17:25.281 ] 00:17:25.281 }' 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.281 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.891 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:25.891 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:25.891 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.891 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.891 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.891 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.892 
[2024-12-09 14:50:03.724553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 819608dd-6c1f-4029-a26e-582936d620ba '!=' 819608dd-6c1f-4029-a26e-582936d620ba ']' 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 87490 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 87490 ']' 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 87490 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87490 00:17:25.892 killing process with pid 87490 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87490' 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 87490 00:17:25.892 [2024-12-09 14:50:03.788318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.892 [2024-12-09 14:50:03.788418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.892 [2024-12-09 14:50:03.788481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.892 [2024-12-09 14:50:03.788496] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:25.892 14:50:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 87490 00:17:25.892 [2024-12-09 14:50:03.992840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.278 14:50:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:27.278 00:17:27.278 real 0m6.100s 00:17:27.278 user 0m9.236s 00:17:27.278 sys 0m1.099s 00:17:27.278 14:50:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.278 ************************************ 00:17:27.278 END TEST raid_superblock_test_4k 00:17:27.278 ************************************ 00:17:27.278 14:50:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.278 14:50:05 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:27.278 14:50:05 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:27.278 14:50:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:27.278 14:50:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.278 14:50:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.278 ************************************ 00:17:27.278 START TEST raid_rebuild_test_sb_4k 00:17:27.278 ************************************ 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:27.278 14:50:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87818 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87818 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87818 ']' 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.278 14:50:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.278 [2024-12-09 14:50:05.278790] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:27.278 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:27.278 Zero copy mechanism will not be used. 
00:17:27.278 [2024-12-09 14:50:05.278992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87818 ] 00:17:27.538 [2024-12-09 14:50:05.433538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.538 [2024-12-09 14:50:05.547962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.797 [2024-12-09 14:50:05.748674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.797 [2024-12-09 14:50:05.748804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.057 BaseBdev1_malloc 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.057 [2024-12-09 14:50:06.170630] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.057 [2024-12-09 14:50:06.170687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.057 [2024-12-09 14:50:06.170708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.057 [2024-12-09 14:50:06.170719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.057 [2024-12-09 14:50:06.172821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.057 [2024-12-09 14:50:06.172946] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.057 BaseBdev1 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 BaseBdev2_malloc 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 [2024-12-09 14:50:06.225202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:28.317 [2024-12-09 14:50:06.225311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:28.317 [2024-12-09 14:50:06.225359] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.317 [2024-12-09 14:50:06.225392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.317 [2024-12-09 14:50:06.227558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.317 [2024-12-09 14:50:06.227646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:28.317 BaseBdev2 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 spare_malloc 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 spare_delay 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 
[2024-12-09 14:50:06.304460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.317 [2024-12-09 14:50:06.304516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.317 [2024-12-09 14:50:06.304534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:28.317 [2024-12-09 14:50:06.304544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.317 [2024-12-09 14:50:06.306617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.317 [2024-12-09 14:50:06.306688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.317 spare 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 [2024-12-09 14:50:06.316539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.317 [2024-12-09 14:50:06.318402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.317 [2024-12-09 14:50:06.318683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.317 [2024-12-09 14:50:06.318738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:28.317 [2024-12-09 14:50:06.319037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:28.317 [2024-12-09 14:50:06.319267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.317 [2024-12-09 
14:50:06.319307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.317 [2024-12-09 14:50:06.319526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.317 "name": "raid_bdev1", 00:17:28.317 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:28.317 "strip_size_kb": 0, 00:17:28.317 "state": "online", 00:17:28.317 "raid_level": "raid1", 00:17:28.317 "superblock": true, 00:17:28.317 "num_base_bdevs": 2, 00:17:28.317 "num_base_bdevs_discovered": 2, 00:17:28.317 "num_base_bdevs_operational": 2, 00:17:28.317 "base_bdevs_list": [ 00:17:28.317 { 00:17:28.317 "name": "BaseBdev1", 00:17:28.318 "uuid": "a9303ae1-bde0-5295-a1ac-1e3d773b68b9", 00:17:28.318 "is_configured": true, 00:17:28.318 "data_offset": 256, 00:17:28.318 "data_size": 7936 00:17:28.318 }, 00:17:28.318 { 00:17:28.318 "name": "BaseBdev2", 00:17:28.318 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:28.318 "is_configured": true, 00:17:28.318 "data_offset": 256, 00:17:28.318 "data_size": 7936 00:17:28.318 } 00:17:28.318 ] 00:17:28.318 }' 00:17:28.318 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.318 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.885 [2024-12-09 14:50:06.808039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:28.885 14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:28.885 
14:50:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:29.145 [2024-12-09 14:50:07.107276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:29.145 /dev/nbd0 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.145 1+0 records in 00:17:29.145 1+0 records out 00:17:29.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293797 s, 13.9 MB/s 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:29.145 14:50:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:29.145 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:29.714 7936+0 records in 00:17:29.714 7936+0 records out 00:17:29.714 32505856 bytes (33 MB, 31 MiB) copied, 0.606536 s, 53.6 MB/s 00:17:29.714 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:29.714 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.714 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:29.714 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:29.714 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:29.714 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.714 14:50:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:29.974 [2024-12-09 14:50:08.005472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.974 [2024-12-09 14:50:08.047460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.974 14:50:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.974 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.233 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.233 "name": "raid_bdev1", 00:17:30.234 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:30.234 "strip_size_kb": 0, 00:17:30.234 "state": "online", 00:17:30.234 "raid_level": "raid1", 00:17:30.234 "superblock": true, 00:17:30.234 "num_base_bdevs": 2, 00:17:30.234 "num_base_bdevs_discovered": 1, 00:17:30.234 "num_base_bdevs_operational": 1, 00:17:30.234 "base_bdevs_list": [ 00:17:30.234 { 00:17:30.234 "name": null, 00:17:30.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.234 "is_configured": false, 00:17:30.234 "data_offset": 0, 00:17:30.234 "data_size": 7936 00:17:30.234 }, 00:17:30.234 { 00:17:30.234 "name": "BaseBdev2", 00:17:30.234 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:30.234 "is_configured": true, 00:17:30.234 "data_offset": 256, 00:17:30.234 
"data_size": 7936 00:17:30.234 } 00:17:30.234 ] 00:17:30.234 }' 00:17:30.234 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.234 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.494 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:30.494 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.494 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.494 [2024-12-09 14:50:08.474752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:30.494 [2024-12-09 14:50:08.491898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:30.494 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.494 14:50:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:30.494 [2024-12-09 14:50:08.493753] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.433 "name": "raid_bdev1", 00:17:31.433 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:31.433 "strip_size_kb": 0, 00:17:31.433 "state": "online", 00:17:31.433 "raid_level": "raid1", 00:17:31.433 "superblock": true, 00:17:31.433 "num_base_bdevs": 2, 00:17:31.433 "num_base_bdevs_discovered": 2, 00:17:31.433 "num_base_bdevs_operational": 2, 00:17:31.433 "process": { 00:17:31.433 "type": "rebuild", 00:17:31.433 "target": "spare", 00:17:31.433 "progress": { 00:17:31.433 "blocks": 2560, 00:17:31.433 "percent": 32 00:17:31.433 } 00:17:31.433 }, 00:17:31.433 "base_bdevs_list": [ 00:17:31.433 { 00:17:31.433 "name": "spare", 00:17:31.433 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:31.433 "is_configured": true, 00:17:31.433 "data_offset": 256, 00:17:31.433 "data_size": 7936 00:17:31.433 }, 00:17:31.433 { 00:17:31.433 "name": "BaseBdev2", 00:17:31.433 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:31.433 "is_configured": true, 00:17:31.433 "data_offset": 256, 00:17:31.433 "data_size": 7936 00:17:31.433 } 00:17:31.433 ] 00:17:31.433 }' 00:17:31.433 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.694 [2024-12-09 14:50:09.629180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.694 [2024-12-09 14:50:09.699513] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:31.694 [2024-12-09 14:50:09.699711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.694 [2024-12-09 14:50:09.699730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.694 [2024-12-09 14:50:09.699744] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.694 "name": "raid_bdev1", 00:17:31.694 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:31.694 "strip_size_kb": 0, 00:17:31.694 "state": "online", 00:17:31.694 "raid_level": "raid1", 00:17:31.694 "superblock": true, 00:17:31.694 "num_base_bdevs": 2, 00:17:31.694 "num_base_bdevs_discovered": 1, 00:17:31.694 "num_base_bdevs_operational": 1, 00:17:31.694 "base_bdevs_list": [ 00:17:31.694 { 00:17:31.694 "name": null, 00:17:31.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.694 "is_configured": false, 00:17:31.694 "data_offset": 0, 00:17:31.694 "data_size": 7936 00:17:31.694 }, 00:17:31.694 { 00:17:31.694 "name": "BaseBdev2", 00:17:31.694 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:31.694 "is_configured": true, 00:17:31.694 "data_offset": 256, 00:17:31.694 "data_size": 7936 00:17:31.694 } 00:17:31.694 ] 00:17:31.694 }' 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.694 14:50:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.263 14:50:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.263 "name": "raid_bdev1", 00:17:32.263 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:32.263 "strip_size_kb": 0, 00:17:32.263 "state": "online", 00:17:32.263 "raid_level": "raid1", 00:17:32.263 "superblock": true, 00:17:32.263 "num_base_bdevs": 2, 00:17:32.263 "num_base_bdevs_discovered": 1, 00:17:32.263 "num_base_bdevs_operational": 1, 00:17:32.263 "base_bdevs_list": [ 00:17:32.263 { 00:17:32.263 "name": null, 00:17:32.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.263 "is_configured": false, 00:17:32.263 "data_offset": 0, 00:17:32.263 "data_size": 7936 00:17:32.263 }, 00:17:32.263 { 00:17:32.263 "name": "BaseBdev2", 00:17:32.263 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:32.263 "is_configured": true, 00:17:32.263 "data_offset": 
256, 00:17:32.263 "data_size": 7936 00:17:32.263 } 00:17:32.263 ] 00:17:32.263 }' 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.263 [2024-12-09 14:50:10.285721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.263 [2024-12-09 14:50:10.302822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.263 14:50:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:32.263 [2024-12-09 14:50:10.304834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.203 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.463 "name": "raid_bdev1", 00:17:33.463 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:33.463 "strip_size_kb": 0, 00:17:33.463 "state": "online", 00:17:33.463 "raid_level": "raid1", 00:17:33.463 "superblock": true, 00:17:33.463 "num_base_bdevs": 2, 00:17:33.463 "num_base_bdevs_discovered": 2, 00:17:33.463 "num_base_bdevs_operational": 2, 00:17:33.463 "process": { 00:17:33.463 "type": "rebuild", 00:17:33.463 "target": "spare", 00:17:33.463 "progress": { 00:17:33.463 "blocks": 2560, 00:17:33.463 "percent": 32 00:17:33.463 } 00:17:33.463 }, 00:17:33.463 "base_bdevs_list": [ 00:17:33.463 { 00:17:33.463 "name": "spare", 00:17:33.463 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:33.463 "is_configured": true, 00:17:33.463 "data_offset": 256, 00:17:33.463 "data_size": 7936 00:17:33.463 }, 00:17:33.463 { 00:17:33.463 "name": "BaseBdev2", 00:17:33.463 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:33.463 "is_configured": true, 00:17:33.463 "data_offset": 256, 00:17:33.463 "data_size": 7936 00:17:33.463 } 00:17:33.463 ] 00:17:33.463 }' 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:33.463 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=683 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.463 14:50:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.463 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.463 "name": "raid_bdev1", 00:17:33.463 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:33.463 "strip_size_kb": 0, 00:17:33.463 "state": "online", 00:17:33.463 "raid_level": "raid1", 00:17:33.463 "superblock": true, 00:17:33.463 "num_base_bdevs": 2, 00:17:33.463 "num_base_bdevs_discovered": 2, 00:17:33.463 "num_base_bdevs_operational": 2, 00:17:33.463 "process": { 00:17:33.463 "type": "rebuild", 00:17:33.463 "target": "spare", 00:17:33.463 "progress": { 00:17:33.463 "blocks": 2816, 00:17:33.463 "percent": 35 00:17:33.463 } 00:17:33.463 }, 00:17:33.463 "base_bdevs_list": [ 00:17:33.463 { 00:17:33.464 "name": "spare", 00:17:33.464 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:33.464 "is_configured": true, 00:17:33.464 "data_offset": 256, 00:17:33.464 "data_size": 7936 00:17:33.464 }, 00:17:33.464 { 00:17:33.464 "name": "BaseBdev2", 00:17:33.464 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:33.464 "is_configured": true, 00:17:33.464 "data_offset": 256, 00:17:33.464 "data_size": 7936 00:17:33.464 } 00:17:33.464 ] 00:17:33.464 }' 00:17:33.464 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.464 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.464 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.723 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.723 14:50:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.675 "name": "raid_bdev1", 00:17:34.675 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:34.675 "strip_size_kb": 0, 00:17:34.675 "state": "online", 00:17:34.675 "raid_level": "raid1", 00:17:34.675 "superblock": true, 00:17:34.675 "num_base_bdevs": 2, 00:17:34.675 "num_base_bdevs_discovered": 2, 00:17:34.675 "num_base_bdevs_operational": 2, 00:17:34.675 "process": { 00:17:34.675 "type": "rebuild", 00:17:34.675 "target": "spare", 00:17:34.675 "progress": { 00:17:34.675 "blocks": 5632, 00:17:34.675 "percent": 70 00:17:34.675 } 00:17:34.675 }, 00:17:34.675 "base_bdevs_list": [ 00:17:34.675 { 
00:17:34.675 "name": "spare", 00:17:34.675 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:34.675 "is_configured": true, 00:17:34.675 "data_offset": 256, 00:17:34.675 "data_size": 7936 00:17:34.675 }, 00:17:34.675 { 00:17:34.675 "name": "BaseBdev2", 00:17:34.675 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:34.675 "is_configured": true, 00:17:34.675 "data_offset": 256, 00:17:34.675 "data_size": 7936 00:17:34.675 } 00:17:34.675 ] 00:17:34.675 }' 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.675 14:50:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.615 [2024-12-09 14:50:13.419565] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:35.615 [2024-12-09 14:50:13.419659] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:35.615 [2024-12-09 14:50:13.419793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.875 "name": "raid_bdev1", 00:17:35.875 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:35.875 "strip_size_kb": 0, 00:17:35.875 "state": "online", 00:17:35.875 "raid_level": "raid1", 00:17:35.875 "superblock": true, 00:17:35.875 "num_base_bdevs": 2, 00:17:35.875 "num_base_bdevs_discovered": 2, 00:17:35.875 "num_base_bdevs_operational": 2, 00:17:35.875 "base_bdevs_list": [ 00:17:35.875 { 00:17:35.875 "name": "spare", 00:17:35.875 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:35.875 "is_configured": true, 00:17:35.875 "data_offset": 256, 00:17:35.875 "data_size": 7936 00:17:35.875 }, 00:17:35.875 { 00:17:35.875 "name": "BaseBdev2", 00:17:35.875 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:35.875 "is_configured": true, 00:17:35.875 "data_offset": 256, 00:17:35.875 "data_size": 7936 00:17:35.875 } 00:17:35.875 ] 00:17:35.875 }' 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.875 "name": "raid_bdev1", 00:17:35.875 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:35.875 "strip_size_kb": 0, 00:17:35.875 "state": "online", 00:17:35.875 "raid_level": "raid1", 00:17:35.875 "superblock": true, 00:17:35.875 "num_base_bdevs": 2, 00:17:35.875 "num_base_bdevs_discovered": 2, 00:17:35.875 "num_base_bdevs_operational": 2, 00:17:35.875 "base_bdevs_list": [ 00:17:35.875 { 00:17:35.875 "name": "spare", 00:17:35.875 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:35.875 "is_configured": true, 00:17:35.875 
"data_offset": 256, 00:17:35.875 "data_size": 7936 00:17:35.875 }, 00:17:35.875 { 00:17:35.875 "name": "BaseBdev2", 00:17:35.875 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:35.875 "is_configured": true, 00:17:35.875 "data_offset": 256, 00:17:35.875 "data_size": 7936 00:17:35.875 } 00:17:35.875 ] 00:17:35.875 }' 00:17:35.875 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.135 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.135 14:50:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.135 "name": "raid_bdev1", 00:17:36.135 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:36.135 "strip_size_kb": 0, 00:17:36.135 "state": "online", 00:17:36.135 "raid_level": "raid1", 00:17:36.135 "superblock": true, 00:17:36.135 "num_base_bdevs": 2, 00:17:36.135 "num_base_bdevs_discovered": 2, 00:17:36.135 "num_base_bdevs_operational": 2, 00:17:36.135 "base_bdevs_list": [ 00:17:36.135 { 00:17:36.135 "name": "spare", 00:17:36.135 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:36.135 "is_configured": true, 00:17:36.135 "data_offset": 256, 00:17:36.135 "data_size": 7936 00:17:36.135 }, 00:17:36.135 { 00:17:36.135 "name": "BaseBdev2", 00:17:36.135 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:36.135 "is_configured": true, 00:17:36.135 "data_offset": 256, 00:17:36.135 "data_size": 7936 00:17:36.135 } 00:17:36.135 ] 00:17:36.135 }' 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.135 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.397 
[2024-12-09 14:50:14.447349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.397 [2024-12-09 14:50:14.447385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.397 [2024-12-09 14:50:14.447466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.397 [2024-12-09 14:50:14.447535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.397 [2024-12-09 14:50:14.447545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:36.397 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:36.657 /dev/nbd0 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:36.657 1+0 records in 00:17:36.657 1+0 records out 00:17:36.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368115 s, 11.1 MB/s 00:17:36.657 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:36.916 /dev/nbd1 00:17:36.916 14:50:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:36.916 1+0 records in 00:17:36.916 1+0 records out 00:17:36.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263517 s, 15.5 MB/s 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:36.916 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:37.178 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:37.178 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.178 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:37.178 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.178 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:37.178 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.178 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.442 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:37.702 14:50:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.702 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.702 [2024-12-09 14:50:15.656780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.702 [2024-12-09 14:50:15.656841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.702 [2024-12-09 14:50:15.656868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:37.702 [2024-12-09 14:50:15.656877] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.703 [2024-12-09 14:50:15.659063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.703 
[2024-12-09 14:50:15.659134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.703 [2024-12-09 14:50:15.659259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.703 [2024-12-09 14:50:15.659366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.703 [2024-12-09 14:50:15.659552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.703 spare 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.703 [2024-12-09 14:50:15.759516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:37.703 [2024-12-09 14:50:15.759556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:37.703 [2024-12-09 14:50:15.759891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:37.703 [2024-12-09 14:50:15.760128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:37.703 [2024-12-09 14:50:15.760145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:37.703 [2024-12-09 14:50:15.760349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.703 14:50:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.703 "name": "raid_bdev1", 00:17:37.703 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:37.703 "strip_size_kb": 0, 00:17:37.703 "state": "online", 00:17:37.703 "raid_level": "raid1", 00:17:37.703 "superblock": true, 00:17:37.703 "num_base_bdevs": 2, 00:17:37.703 "num_base_bdevs_discovered": 2, 00:17:37.703 "num_base_bdevs_operational": 2, 
00:17:37.703 "base_bdevs_list": [ 00:17:37.703 { 00:17:37.703 "name": "spare", 00:17:37.703 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:37.703 "is_configured": true, 00:17:37.703 "data_offset": 256, 00:17:37.703 "data_size": 7936 00:17:37.703 }, 00:17:37.703 { 00:17:37.703 "name": "BaseBdev2", 00:17:37.703 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:37.703 "is_configured": true, 00:17:37.703 "data_offset": 256, 00:17:37.703 "data_size": 7936 00:17:37.703 } 00:17:37.703 ] 00:17:37.703 }' 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.703 14:50:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.272 "name": "raid_bdev1", 00:17:38.272 
"uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:38.272 "strip_size_kb": 0, 00:17:38.272 "state": "online", 00:17:38.272 "raid_level": "raid1", 00:17:38.272 "superblock": true, 00:17:38.272 "num_base_bdevs": 2, 00:17:38.272 "num_base_bdevs_discovered": 2, 00:17:38.272 "num_base_bdevs_operational": 2, 00:17:38.272 "base_bdevs_list": [ 00:17:38.272 { 00:17:38.272 "name": "spare", 00:17:38.272 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:38.272 "is_configured": true, 00:17:38.272 "data_offset": 256, 00:17:38.272 "data_size": 7936 00:17:38.272 }, 00:17:38.272 { 00:17:38.272 "name": "BaseBdev2", 00:17:38.272 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:38.272 "is_configured": true, 00:17:38.272 "data_offset": 256, 00:17:38.272 "data_size": 7936 00:17:38.272 } 00:17:38.272 ] 00:17:38.272 }' 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.272 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.532 [2024-12-09 14:50:16.431586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.532 
14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.532 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.532 "name": "raid_bdev1", 00:17:38.532 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:38.532 "strip_size_kb": 0, 00:17:38.532 "state": "online", 00:17:38.533 "raid_level": "raid1", 00:17:38.533 "superblock": true, 00:17:38.533 "num_base_bdevs": 2, 00:17:38.533 "num_base_bdevs_discovered": 1, 00:17:38.533 "num_base_bdevs_operational": 1, 00:17:38.533 "base_bdevs_list": [ 00:17:38.533 { 00:17:38.533 "name": null, 00:17:38.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.533 "is_configured": false, 00:17:38.533 "data_offset": 0, 00:17:38.533 "data_size": 7936 00:17:38.533 }, 00:17:38.533 { 00:17:38.533 "name": "BaseBdev2", 00:17:38.533 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:38.533 "is_configured": true, 00:17:38.533 "data_offset": 256, 00:17:38.533 "data_size": 7936 00:17:38.533 } 00:17:38.533 ] 00:17:38.533 }' 00:17:38.533 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.533 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.102 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.102 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.102 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.102 [2024-12-09 14:50:16.922870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.102 [2024-12-09 14:50:16.923079] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:17:39.102 [2024-12-09 14:50:16.923097] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:39.102 [2024-12-09 14:50:16.923136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.102 [2024-12-09 14:50:16.939555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:39.102 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.102 14:50:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:39.102 [2024-12-09 14:50:16.941469] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.038 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.038 
"name": "raid_bdev1", 00:17:40.038 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:40.038 "strip_size_kb": 0, 00:17:40.038 "state": "online", 00:17:40.038 "raid_level": "raid1", 00:17:40.038 "superblock": true, 00:17:40.038 "num_base_bdevs": 2, 00:17:40.038 "num_base_bdevs_discovered": 2, 00:17:40.038 "num_base_bdevs_operational": 2, 00:17:40.039 "process": { 00:17:40.039 "type": "rebuild", 00:17:40.039 "target": "spare", 00:17:40.039 "progress": { 00:17:40.039 "blocks": 2560, 00:17:40.039 "percent": 32 00:17:40.039 } 00:17:40.039 }, 00:17:40.039 "base_bdevs_list": [ 00:17:40.039 { 00:17:40.039 "name": "spare", 00:17:40.039 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:40.039 "is_configured": true, 00:17:40.039 "data_offset": 256, 00:17:40.039 "data_size": 7936 00:17:40.039 }, 00:17:40.039 { 00:17:40.039 "name": "BaseBdev2", 00:17:40.039 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:40.039 "is_configured": true, 00:17:40.039 "data_offset": 256, 00:17:40.039 "data_size": 7936 00:17:40.039 } 00:17:40.039 ] 00:17:40.039 }' 00:17:40.039 14:50:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.039 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.039 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.039 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.039 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:40.039 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.039 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.039 [2024-12-09 14:50:18.096925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.039 [2024-12-09 
14:50:18.147166] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:40.039 [2024-12-09 14:50:18.147406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.039 [2024-12-09 14:50:18.147450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.039 [2024-12-09 14:50:18.147475] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.299 14:50:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.299 "name": "raid_bdev1", 00:17:40.299 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:40.299 "strip_size_kb": 0, 00:17:40.299 "state": "online", 00:17:40.299 "raid_level": "raid1", 00:17:40.299 "superblock": true, 00:17:40.299 "num_base_bdevs": 2, 00:17:40.299 "num_base_bdevs_discovered": 1, 00:17:40.299 "num_base_bdevs_operational": 1, 00:17:40.299 "base_bdevs_list": [ 00:17:40.299 { 00:17:40.299 "name": null, 00:17:40.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.299 "is_configured": false, 00:17:40.299 "data_offset": 0, 00:17:40.299 "data_size": 7936 00:17:40.299 }, 00:17:40.299 { 00:17:40.299 "name": "BaseBdev2", 00:17:40.299 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:40.299 "is_configured": true, 00:17:40.299 "data_offset": 256, 00:17:40.299 "data_size": 7936 00:17:40.299 } 00:17:40.299 ] 00:17:40.299 }' 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.299 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.559 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:40.559 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.559 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.559 [2024-12-09 14:50:18.631406] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:40.559 [2024-12-09 14:50:18.631522] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.559 [2024-12-09 14:50:18.631578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:40.559 [2024-12-09 14:50:18.631631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.559 [2024-12-09 14:50:18.632156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.559 [2024-12-09 14:50:18.632226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:40.559 [2024-12-09 14:50:18.632363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:40.559 [2024-12-09 14:50:18.632407] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.559 [2024-12-09 14:50:18.632451] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:40.559 [2024-12-09 14:50:18.632507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.559 [2024-12-09 14:50:18.648771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:40.559 spare 00:17:40.559 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.559 [2024-12-09 14:50:18.650813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.559 14:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.941 "name": "raid_bdev1", 00:17:41.941 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:41.941 "strip_size_kb": 0, 00:17:41.941 
"state": "online", 00:17:41.941 "raid_level": "raid1", 00:17:41.941 "superblock": true, 00:17:41.941 "num_base_bdevs": 2, 00:17:41.941 "num_base_bdevs_discovered": 2, 00:17:41.941 "num_base_bdevs_operational": 2, 00:17:41.941 "process": { 00:17:41.941 "type": "rebuild", 00:17:41.941 "target": "spare", 00:17:41.941 "progress": { 00:17:41.941 "blocks": 2560, 00:17:41.941 "percent": 32 00:17:41.941 } 00:17:41.941 }, 00:17:41.941 "base_bdevs_list": [ 00:17:41.941 { 00:17:41.941 "name": "spare", 00:17:41.941 "uuid": "3d541ccd-1b36-55de-bb6f-e4dd53c74820", 00:17:41.941 "is_configured": true, 00:17:41.941 "data_offset": 256, 00:17:41.941 "data_size": 7936 00:17:41.941 }, 00:17:41.941 { 00:17:41.941 "name": "BaseBdev2", 00:17:41.941 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:41.941 "is_configured": true, 00:17:41.941 "data_offset": 256, 00:17:41.941 "data_size": 7936 00:17:41.941 } 00:17:41.941 ] 00:17:41.941 }' 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.941 [2024-12-09 14:50:19.802294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.941 [2024-12-09 14:50:19.856518] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:41.941 [2024-12-09 14:50:19.856624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.941 [2024-12-09 14:50:19.856656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.941 [2024-12-09 14:50:19.856664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.941 14:50:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.941 "name": "raid_bdev1", 00:17:41.941 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:41.941 "strip_size_kb": 0, 00:17:41.941 "state": "online", 00:17:41.941 "raid_level": "raid1", 00:17:41.941 "superblock": true, 00:17:41.941 "num_base_bdevs": 2, 00:17:41.941 "num_base_bdevs_discovered": 1, 00:17:41.941 "num_base_bdevs_operational": 1, 00:17:41.941 "base_bdevs_list": [ 00:17:41.941 { 00:17:41.941 "name": null, 00:17:41.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.941 "is_configured": false, 00:17:41.941 "data_offset": 0, 00:17:41.941 "data_size": 7936 00:17:41.941 }, 00:17:41.941 { 00:17:41.941 "name": "BaseBdev2", 00:17:41.941 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:41.941 "is_configured": true, 00:17:41.941 "data_offset": 256, 00:17:41.941 "data_size": 7936 00:17:41.941 } 00:17:41.941 ] 00:17:41.941 }' 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.941 14:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.511 "name": "raid_bdev1", 00:17:42.511 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:42.511 "strip_size_kb": 0, 00:17:42.511 "state": "online", 00:17:42.511 "raid_level": "raid1", 00:17:42.511 "superblock": true, 00:17:42.511 "num_base_bdevs": 2, 00:17:42.511 "num_base_bdevs_discovered": 1, 00:17:42.511 "num_base_bdevs_operational": 1, 00:17:42.511 "base_bdevs_list": [ 00:17:42.511 { 00:17:42.511 "name": null, 00:17:42.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.511 "is_configured": false, 00:17:42.511 "data_offset": 0, 00:17:42.511 "data_size": 7936 00:17:42.511 }, 00:17:42.511 { 00:17:42.511 "name": "BaseBdev2", 00:17:42.511 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:42.511 "is_configured": true, 00:17:42.511 "data_offset": 256, 00:17:42.511 "data_size": 7936 00:17:42.511 } 00:17:42.511 ] 00:17:42.511 }' 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.511 [2024-12-09 14:50:20.523535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:42.511 [2024-12-09 14:50:20.523652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.511 [2024-12-09 14:50:20.523688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:42.511 [2024-12-09 14:50:20.523707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.511 [2024-12-09 14:50:20.524164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.511 [2024-12-09 14:50:20.524183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.511 [2024-12-09 14:50:20.524272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:42.511 [2024-12-09 14:50:20.524287] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.511 [2024-12-09 14:50:20.524303] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:42.511 [2024-12-09 14:50:20.524314] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:42.511 BaseBdev1 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.511 14:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.455 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.723 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.723 "name": "raid_bdev1", 00:17:43.723 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:43.723 "strip_size_kb": 0, 00:17:43.723 "state": "online", 00:17:43.723 "raid_level": "raid1", 00:17:43.723 "superblock": true, 00:17:43.723 "num_base_bdevs": 2, 00:17:43.723 "num_base_bdevs_discovered": 1, 00:17:43.723 "num_base_bdevs_operational": 1, 00:17:43.723 "base_bdevs_list": [ 00:17:43.723 { 00:17:43.723 "name": null, 00:17:43.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.723 "is_configured": false, 00:17:43.723 "data_offset": 0, 00:17:43.723 "data_size": 7936 00:17:43.723 }, 00:17:43.723 { 00:17:43.723 "name": "BaseBdev2", 00:17:43.723 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:43.723 "is_configured": true, 00:17:43.723 "data_offset": 256, 00:17:43.723 "data_size": 7936 00:17:43.723 } 00:17:43.723 ] 00:17:43.723 }' 00:17:43.723 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.723 14:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.982 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.982 "name": "raid_bdev1", 00:17:43.982 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:43.982 "strip_size_kb": 0, 00:17:43.982 "state": "online", 00:17:43.982 "raid_level": "raid1", 00:17:43.982 "superblock": true, 00:17:43.982 "num_base_bdevs": 2, 00:17:43.983 "num_base_bdevs_discovered": 1, 00:17:43.983 "num_base_bdevs_operational": 1, 00:17:43.983 "base_bdevs_list": [ 00:17:43.983 { 00:17:43.983 "name": null, 00:17:43.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.983 "is_configured": false, 00:17:43.983 "data_offset": 0, 00:17:43.983 "data_size": 7936 00:17:43.983 }, 00:17:43.983 { 00:17:43.983 "name": "BaseBdev2", 00:17:43.983 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:43.983 "is_configured": true, 00:17:43.983 "data_offset": 256, 00:17:43.983 "data_size": 7936 00:17:43.983 } 00:17:43.983 ] 00:17:43.983 }' 00:17:43.983 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.242 [2024-12-09 14:50:22.172990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.242 [2024-12-09 14:50:22.173243] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.242 [2024-12-09 14:50:22.173313] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:44.242 request: 00:17:44.242 { 00:17:44.242 "base_bdev": "BaseBdev1", 00:17:44.242 "raid_bdev": "raid_bdev1", 00:17:44.242 "method": "bdev_raid_add_base_bdev", 00:17:44.242 "req_id": 1 00:17:44.242 } 00:17:44.242 Got JSON-RPC error response 00:17:44.242 response: 00:17:44.242 { 00:17:44.242 "code": -22, 00:17:44.242 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:44.242 } 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.242 14:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.182 "name": "raid_bdev1", 00:17:45.182 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:45.182 "strip_size_kb": 0, 00:17:45.182 "state": "online", 00:17:45.182 "raid_level": "raid1", 00:17:45.182 "superblock": true, 00:17:45.182 "num_base_bdevs": 2, 00:17:45.182 "num_base_bdevs_discovered": 1, 00:17:45.182 "num_base_bdevs_operational": 1, 00:17:45.182 "base_bdevs_list": [ 00:17:45.182 { 00:17:45.182 "name": null, 00:17:45.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.182 "is_configured": false, 00:17:45.182 "data_offset": 0, 00:17:45.182 "data_size": 7936 00:17:45.182 }, 00:17:45.182 { 00:17:45.182 "name": "BaseBdev2", 00:17:45.182 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:45.182 "is_configured": true, 00:17:45.182 "data_offset": 256, 00:17:45.182 "data_size": 7936 00:17:45.182 } 00:17:45.182 ] 00:17:45.182 }' 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.182 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.752 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.752 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.752 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.752 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.752 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.752 14:50:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.752 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.753 "name": "raid_bdev1", 00:17:45.753 "uuid": "4264f8d5-7d91-4467-a0a6-b8d0b7c6a18f", 00:17:45.753 "strip_size_kb": 0, 00:17:45.753 "state": "online", 00:17:45.753 "raid_level": "raid1", 00:17:45.753 "superblock": true, 00:17:45.753 "num_base_bdevs": 2, 00:17:45.753 "num_base_bdevs_discovered": 1, 00:17:45.753 "num_base_bdevs_operational": 1, 00:17:45.753 "base_bdevs_list": [ 00:17:45.753 { 00:17:45.753 "name": null, 00:17:45.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.753 "is_configured": false, 00:17:45.753 "data_offset": 0, 00:17:45.753 "data_size": 7936 00:17:45.753 }, 00:17:45.753 { 00:17:45.753 "name": "BaseBdev2", 00:17:45.753 "uuid": "3130c707-aa7c-5635-a972-0ec942990c00", 00:17:45.753 "is_configured": true, 00:17:45.753 "data_offset": 256, 00:17:45.753 "data_size": 7936 00:17:45.753 } 00:17:45.753 ] 00:17:45.753 }' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.753 14:50:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87818 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87818 ']' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87818 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87818 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.753 killing process with pid 87818 00:17:45.753 Received shutdown signal, test time was about 60.000000 seconds 00:17:45.753 00:17:45.753 Latency(us) 00:17:45.753 [2024-12-09T14:50:23.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.753 [2024-12-09T14:50:23.875Z] =================================================================================================================== 00:17:45.753 [2024-12-09T14:50:23.875Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87818' 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87818 00:17:45.753 [2024-12-09 14:50:23.812675] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.753 [2024-12-09 14:50:23.812808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.753 14:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87818 00:17:45.753 [2024-12-09 
14:50:23.812870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.753 [2024-12-09 14:50:23.812883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:46.013 [2024-12-09 14:50:24.111450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.392 14:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:47.392 00:17:47.392 real 0m20.033s 00:17:47.392 user 0m26.262s 00:17:47.392 sys 0m2.566s 00:17:47.392 14:50:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.392 14:50:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.392 ************************************ 00:17:47.392 END TEST raid_rebuild_test_sb_4k 00:17:47.392 ************************************ 00:17:47.392 14:50:25 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:47.392 14:50:25 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:47.392 14:50:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:47.392 14:50:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.392 14:50:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.392 ************************************ 00:17:47.392 START TEST raid_state_function_test_sb_md_separate 00:17:47.392 ************************************ 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:47.392 
14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:47.392 14:50:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88513 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88513' 00:17:47.392 Process raid pid: 88513 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88513 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88513 ']' 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.392 14:50:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.392 [2024-12-09 14:50:25.374260] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:47.392 [2024-12-09 14:50:25.374373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.651 [2024-12-09 14:50:25.549672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.651 [2024-12-09 14:50:25.663650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.911 [2024-12-09 14:50:25.864426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.911 [2024-12-09 14:50:25.864470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.169 [2024-12-09 14:50:26.226435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.169 [2024-12-09 14:50:26.226497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:48.169 [2024-12-09 14:50:26.226508] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.169 [2024-12-09 14:50:26.226518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.169 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.169 "name": "Existed_Raid", 00:17:48.169 "uuid": "27b59adc-3c3e-41eb-8aac-4d1253aad8d3", 00:17:48.169 "strip_size_kb": 0, 00:17:48.169 "state": "configuring", 00:17:48.169 "raid_level": "raid1", 00:17:48.169 "superblock": true, 00:17:48.169 "num_base_bdevs": 2, 00:17:48.169 "num_base_bdevs_discovered": 0, 00:17:48.170 "num_base_bdevs_operational": 2, 00:17:48.170 "base_bdevs_list": [ 00:17:48.170 { 00:17:48.170 "name": "BaseBdev1", 00:17:48.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.170 "is_configured": false, 00:17:48.170 "data_offset": 0, 00:17:48.170 "data_size": 0 00:17:48.170 }, 00:17:48.170 { 00:17:48.170 "name": "BaseBdev2", 00:17:48.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.170 "is_configured": false, 00:17:48.170 "data_offset": 0, 00:17:48.170 "data_size": 0 00:17:48.170 } 00:17:48.170 ] 00:17:48.170 }' 00:17:48.170 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.170 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.736 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:48.736 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.736 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.736 [2024-12-09 
14:50:26.677620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.736 [2024-12-09 14:50:26.677728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:48.736 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.736 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:48.736 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.736 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.736 [2024-12-09 14:50:26.689617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.736 [2024-12-09 14:50:26.689713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.736 [2024-12-09 14:50:26.689747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.736 [2024-12-09 14:50:26.689774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.737 [2024-12-09 14:50:26.737768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.737 BaseBdev1 
00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.737 [ 00:17:48.737 { 00:17:48.737 "name": "BaseBdev1", 00:17:48.737 "aliases": [ 00:17:48.737 "d7088f68-d22f-48d6-b6ad-6725e7407c8d" 00:17:48.737 ], 00:17:48.737 "product_name": "Malloc disk", 00:17:48.737 
"block_size": 4096, 00:17:48.737 "num_blocks": 8192, 00:17:48.737 "uuid": "d7088f68-d22f-48d6-b6ad-6725e7407c8d", 00:17:48.737 "md_size": 32, 00:17:48.737 "md_interleave": false, 00:17:48.737 "dif_type": 0, 00:17:48.737 "assigned_rate_limits": { 00:17:48.737 "rw_ios_per_sec": 0, 00:17:48.737 "rw_mbytes_per_sec": 0, 00:17:48.737 "r_mbytes_per_sec": 0, 00:17:48.737 "w_mbytes_per_sec": 0 00:17:48.737 }, 00:17:48.737 "claimed": true, 00:17:48.737 "claim_type": "exclusive_write", 00:17:48.737 "zoned": false, 00:17:48.737 "supported_io_types": { 00:17:48.737 "read": true, 00:17:48.737 "write": true, 00:17:48.737 "unmap": true, 00:17:48.737 "flush": true, 00:17:48.737 "reset": true, 00:17:48.737 "nvme_admin": false, 00:17:48.737 "nvme_io": false, 00:17:48.737 "nvme_io_md": false, 00:17:48.737 "write_zeroes": true, 00:17:48.737 "zcopy": true, 00:17:48.737 "get_zone_info": false, 00:17:48.737 "zone_management": false, 00:17:48.737 "zone_append": false, 00:17:48.737 "compare": false, 00:17:48.737 "compare_and_write": false, 00:17:48.737 "abort": true, 00:17:48.737 "seek_hole": false, 00:17:48.737 "seek_data": false, 00:17:48.737 "copy": true, 00:17:48.737 "nvme_iov_md": false 00:17:48.737 }, 00:17:48.737 "memory_domains": [ 00:17:48.737 { 00:17:48.737 "dma_device_id": "system", 00:17:48.737 "dma_device_type": 1 00:17:48.737 }, 00:17:48.737 { 00:17:48.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.737 "dma_device_type": 2 00:17:48.737 } 00:17:48.737 ], 00:17:48.737 "driver_specific": {} 00:17:48.737 } 00:17:48.737 ] 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:48.737 14:50:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.737 "name": "Existed_Raid", 00:17:48.737 "uuid": "8c11334d-4334-4538-9e7a-73ce217484e9", 
00:17:48.737 "strip_size_kb": 0, 00:17:48.737 "state": "configuring", 00:17:48.737 "raid_level": "raid1", 00:17:48.737 "superblock": true, 00:17:48.737 "num_base_bdevs": 2, 00:17:48.737 "num_base_bdevs_discovered": 1, 00:17:48.737 "num_base_bdevs_operational": 2, 00:17:48.737 "base_bdevs_list": [ 00:17:48.737 { 00:17:48.737 "name": "BaseBdev1", 00:17:48.737 "uuid": "d7088f68-d22f-48d6-b6ad-6725e7407c8d", 00:17:48.737 "is_configured": true, 00:17:48.737 "data_offset": 256, 00:17:48.737 "data_size": 7936 00:17:48.737 }, 00:17:48.737 { 00:17:48.737 "name": "BaseBdev2", 00:17:48.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.737 "is_configured": false, 00:17:48.737 "data_offset": 0, 00:17:48.737 "data_size": 0 00:17:48.737 } 00:17:48.737 ] 00:17:48.737 }' 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.737 14:50:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.305 [2024-12-09 14:50:27.245004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.305 [2024-12-09 14:50:27.245139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:49.305 14:50:27 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.305 [2024-12-09 14:50:27.257013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.305 [2024-12-09 14:50:27.258891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.305 [2024-12-09 14:50:27.258970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.305 "name": "Existed_Raid", 00:17:49.305 "uuid": "b88f962e-7836-403b-9c17-63859984c3c2", 00:17:49.305 "strip_size_kb": 0, 00:17:49.305 "state": "configuring", 00:17:49.305 "raid_level": "raid1", 00:17:49.305 "superblock": true, 00:17:49.305 "num_base_bdevs": 2, 00:17:49.305 "num_base_bdevs_discovered": 1, 00:17:49.305 "num_base_bdevs_operational": 2, 00:17:49.305 "base_bdevs_list": [ 00:17:49.305 { 00:17:49.305 "name": "BaseBdev1", 00:17:49.305 "uuid": "d7088f68-d22f-48d6-b6ad-6725e7407c8d", 00:17:49.305 "is_configured": true, 00:17:49.305 "data_offset": 256, 00:17:49.305 "data_size": 7936 00:17:49.305 }, 00:17:49.305 { 00:17:49.305 "name": "BaseBdev2", 00:17:49.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.305 "is_configured": false, 00:17:49.305 "data_offset": 0, 00:17:49.305 "data_size": 0 00:17:49.305 } 00:17:49.305 ] 00:17:49.305 }' 00:17:49.305 14:50:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.305 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.875 [2024-12-09 14:50:27.756932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.875 [2024-12-09 14:50:27.757171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.875 [2024-12-09 14:50:27.757189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:49.875 [2024-12-09 14:50:27.757276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:49.875 [2024-12-09 14:50:27.757407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.875 [2024-12-09 14:50:27.757419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:49.875 [2024-12-09 14:50:27.757523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.875 BaseBdev2 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.875 [ 00:17:49.875 { 00:17:49.875 "name": "BaseBdev2", 00:17:49.875 "aliases": [ 00:17:49.875 "35198f84-f9c9-4d18-b864-388ce8dba6e1" 00:17:49.875 ], 00:17:49.875 "product_name": "Malloc disk", 00:17:49.875 "block_size": 4096, 00:17:49.875 "num_blocks": 8192, 00:17:49.875 "uuid": "35198f84-f9c9-4d18-b864-388ce8dba6e1", 00:17:49.875 "md_size": 32, 00:17:49.875 "md_interleave": false, 00:17:49.875 "dif_type": 0, 00:17:49.875 "assigned_rate_limits": { 00:17:49.875 "rw_ios_per_sec": 0, 00:17:49.875 "rw_mbytes_per_sec": 0, 00:17:49.875 "r_mbytes_per_sec": 0, 00:17:49.875 "w_mbytes_per_sec": 0 00:17:49.875 }, 00:17:49.875 "claimed": true, 00:17:49.875 "claim_type": 
"exclusive_write", 00:17:49.875 "zoned": false, 00:17:49.875 "supported_io_types": { 00:17:49.875 "read": true, 00:17:49.875 "write": true, 00:17:49.875 "unmap": true, 00:17:49.875 "flush": true, 00:17:49.875 "reset": true, 00:17:49.875 "nvme_admin": false, 00:17:49.875 "nvme_io": false, 00:17:49.875 "nvme_io_md": false, 00:17:49.875 "write_zeroes": true, 00:17:49.875 "zcopy": true, 00:17:49.875 "get_zone_info": false, 00:17:49.875 "zone_management": false, 00:17:49.875 "zone_append": false, 00:17:49.875 "compare": false, 00:17:49.875 "compare_and_write": false, 00:17:49.875 "abort": true, 00:17:49.875 "seek_hole": false, 00:17:49.875 "seek_data": false, 00:17:49.875 "copy": true, 00:17:49.875 "nvme_iov_md": false 00:17:49.875 }, 00:17:49.875 "memory_domains": [ 00:17:49.875 { 00:17:49.875 "dma_device_id": "system", 00:17:49.875 "dma_device_type": 1 00:17:49.875 }, 00:17:49.875 { 00:17:49.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.875 "dma_device_type": 2 00:17:49.875 } 00:17:49.875 ], 00:17:49.875 "driver_specific": {} 00:17:49.875 } 00:17:49.875 ] 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.875 
14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.875 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.876 "name": "Existed_Raid", 00:17:49.876 "uuid": "b88f962e-7836-403b-9c17-63859984c3c2", 00:17:49.876 "strip_size_kb": 0, 00:17:49.876 "state": "online", 00:17:49.876 "raid_level": "raid1", 00:17:49.876 "superblock": true, 00:17:49.876 "num_base_bdevs": 2, 00:17:49.876 "num_base_bdevs_discovered": 2, 00:17:49.876 "num_base_bdevs_operational": 2, 00:17:49.876 
"base_bdevs_list": [ 00:17:49.876 { 00:17:49.876 "name": "BaseBdev1", 00:17:49.876 "uuid": "d7088f68-d22f-48d6-b6ad-6725e7407c8d", 00:17:49.876 "is_configured": true, 00:17:49.876 "data_offset": 256, 00:17:49.876 "data_size": 7936 00:17:49.876 }, 00:17:49.876 { 00:17:49.876 "name": "BaseBdev2", 00:17:49.876 "uuid": "35198f84-f9c9-4d18-b864-388ce8dba6e1", 00:17:49.876 "is_configured": true, 00:17:49.876 "data_offset": 256, 00:17:49.876 "data_size": 7936 00:17:49.876 } 00:17:49.876 ] 00:17:49.876 }' 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.876 14:50:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.136 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:17:50.396 [2024-12-09 14:50:28.256522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.396 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.396 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.396 "name": "Existed_Raid", 00:17:50.396 "aliases": [ 00:17:50.396 "b88f962e-7836-403b-9c17-63859984c3c2" 00:17:50.396 ], 00:17:50.396 "product_name": "Raid Volume", 00:17:50.396 "block_size": 4096, 00:17:50.396 "num_blocks": 7936, 00:17:50.396 "uuid": "b88f962e-7836-403b-9c17-63859984c3c2", 00:17:50.396 "md_size": 32, 00:17:50.396 "md_interleave": false, 00:17:50.396 "dif_type": 0, 00:17:50.396 "assigned_rate_limits": { 00:17:50.396 "rw_ios_per_sec": 0, 00:17:50.396 "rw_mbytes_per_sec": 0, 00:17:50.396 "r_mbytes_per_sec": 0, 00:17:50.396 "w_mbytes_per_sec": 0 00:17:50.396 }, 00:17:50.396 "claimed": false, 00:17:50.396 "zoned": false, 00:17:50.396 "supported_io_types": { 00:17:50.396 "read": true, 00:17:50.396 "write": true, 00:17:50.396 "unmap": false, 00:17:50.396 "flush": false, 00:17:50.396 "reset": true, 00:17:50.396 "nvme_admin": false, 00:17:50.396 "nvme_io": false, 00:17:50.396 "nvme_io_md": false, 00:17:50.396 "write_zeroes": true, 00:17:50.396 "zcopy": false, 00:17:50.396 "get_zone_info": false, 00:17:50.396 "zone_management": false, 00:17:50.396 "zone_append": false, 00:17:50.396 "compare": false, 00:17:50.396 "compare_and_write": false, 00:17:50.396 "abort": false, 00:17:50.396 "seek_hole": false, 00:17:50.396 "seek_data": false, 00:17:50.396 "copy": false, 00:17:50.396 "nvme_iov_md": false 00:17:50.396 }, 00:17:50.396 "memory_domains": [ 00:17:50.397 { 00:17:50.397 "dma_device_id": "system", 00:17:50.397 "dma_device_type": 1 00:17:50.397 }, 00:17:50.397 { 00:17:50.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.397 "dma_device_type": 2 00:17:50.397 }, 00:17:50.397 { 
00:17:50.397 "dma_device_id": "system", 00:17:50.397 "dma_device_type": 1 00:17:50.397 }, 00:17:50.397 { 00:17:50.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.397 "dma_device_type": 2 00:17:50.397 } 00:17:50.397 ], 00:17:50.397 "driver_specific": { 00:17:50.397 "raid": { 00:17:50.397 "uuid": "b88f962e-7836-403b-9c17-63859984c3c2", 00:17:50.397 "strip_size_kb": 0, 00:17:50.397 "state": "online", 00:17:50.397 "raid_level": "raid1", 00:17:50.397 "superblock": true, 00:17:50.397 "num_base_bdevs": 2, 00:17:50.397 "num_base_bdevs_discovered": 2, 00:17:50.397 "num_base_bdevs_operational": 2, 00:17:50.397 "base_bdevs_list": [ 00:17:50.397 { 00:17:50.397 "name": "BaseBdev1", 00:17:50.397 "uuid": "d7088f68-d22f-48d6-b6ad-6725e7407c8d", 00:17:50.397 "is_configured": true, 00:17:50.397 "data_offset": 256, 00:17:50.397 "data_size": 7936 00:17:50.397 }, 00:17:50.397 { 00:17:50.397 "name": "BaseBdev2", 00:17:50.397 "uuid": "35198f84-f9c9-4d18-b864-388ce8dba6e1", 00:17:50.397 "is_configured": true, 00:17:50.397 "data_offset": 256, 00:17:50.397 "data_size": 7936 00:17:50.397 } 00:17:50.397 ] 00:17:50.397 } 00:17:50.397 } 00:17:50.397 }' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:50.397 BaseBdev2' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.397 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.397 [2024-12-09 14:50:28.491828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.657 "name": "Existed_Raid", 00:17:50.657 "uuid": "b88f962e-7836-403b-9c17-63859984c3c2", 00:17:50.657 "strip_size_kb": 0, 00:17:50.657 "state": "online", 00:17:50.657 "raid_level": "raid1", 00:17:50.657 "superblock": true, 00:17:50.657 "num_base_bdevs": 2, 00:17:50.657 "num_base_bdevs_discovered": 1, 00:17:50.657 "num_base_bdevs_operational": 1, 00:17:50.657 "base_bdevs_list": [ 00:17:50.657 { 00:17:50.657 "name": null, 00:17:50.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.657 "is_configured": false, 00:17:50.657 "data_offset": 0, 00:17:50.657 "data_size": 7936 00:17:50.657 }, 00:17:50.657 { 00:17:50.657 "name": "BaseBdev2", 00:17:50.657 "uuid": 
"35198f84-f9c9-4d18-b864-388ce8dba6e1", 00:17:50.657 "is_configured": true, 00:17:50.657 "data_offset": 256, 00:17:50.657 "data_size": 7936 00:17:50.657 } 00:17:50.657 ] 00:17:50.657 }' 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.657 14:50:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.917 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:50.917 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.917 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:50.917 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.917 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.917 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.176 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.177 [2024-12-09 14:50:29.069044] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:51.177 [2024-12-09 14:50:29.069156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.177 [2024-12-09 14:50:29.178307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.177 [2024-12-09 14:50:29.178356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.177 [2024-12-09 14:50:29.178368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:51.177 14:50:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88513 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88513 ']' 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88513 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88513 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88513' 00:17:51.177 killing process with pid 88513 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88513 00:17:51.177 [2024-12-09 14:50:29.278014] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.177 14:50:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88513 00:17:51.177 [2024-12-09 14:50:29.294779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.564 14:50:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:52.564 00:17:52.564 real 0m5.144s 00:17:52.564 user 0m7.406s 00:17:52.564 sys 0m0.867s 00:17:52.564 14:50:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.564 
14:50:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.564 ************************************ 00:17:52.564 END TEST raid_state_function_test_sb_md_separate 00:17:52.564 ************************************ 00:17:52.564 14:50:30 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:52.564 14:50:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:52.564 14:50:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.564 14:50:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:52.564 ************************************ 00:17:52.564 START TEST raid_superblock_test_md_separate 00:17:52.564 ************************************ 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
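The `killprocess` sequence traced just above (probe the pid with `kill -0`, read the command name with `ps --no-headers -o comm=`, refuse to touch `sudo`, then `kill` and `wait`) can be sketched as a standalone helper. This is a minimal reconstruction from the trace, not the real `autotest_common.sh` function; the backgrounded `sleep` stands in for the SPDK app.

```shell
# Minimal sketch of the killprocess pattern seen in the trace; the real
# helper lives in autotest_common.sh. A backgrounded sleep stands in for
# the bdev_svc / spdk_tgt process the test actually kills.
sleep 30 &
pid=$!

killprocess() {
    local p=$1
    [ -n "$p" ] || return 1                      # the '[' -z "$pid" ']' guard
    kill -0 "$p" 2>/dev/null || return 1         # kill -0 sends nothing, just probes liveness
    local process_name
    process_name=$(ps --no-headers -o comm= "$p")
    [ "$process_name" = sudo ] && return 1       # never kill a sudo wrapper
    echo "killing process with pid $p"
    kill "$p"
    wait "$p" 2>/dev/null                        # reap the child so no zombie is left
    return 0
}

killprocess "$pid"
```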
00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88765 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88765 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88765 ']' 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
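The prologue above launches `bdev_svc` with `-L bdev_raid` and waits for its Unix-domain RPC socket; the test then drives it with `rpc_cmd`. The RPC sequence the superblock test issues can be sketched as a dry run, where `rpc` only echoes what would be sent. Against a live target the same arguments would go to `scripts/rpc.py -s /var/tmp/spdk.sock` (the conventional SPDK client; an assumption, not something this trace shows).

```shell
# Dry-run sketch of the RPC sequence driven by raid_superblock_test above.
# rpc only prints the command line here; rpc_cmd in the trace forwards the
# same arguments to the live SPDK target.
rpc() { echo "rpc.py $*"; }

for i in 1 2; do
    rpc bdev_malloc_create 32 4096 -m 32 -b "malloc$i"    # 32 MiB, 4 KiB blocks, 32 B separate metadata
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"        # fixed UUIDs, as in the trace
done
rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # -s: write an on-disk superblock
rpc bdev_raid_get_bdevs all                                   # state should come back "online"
```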
00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.564 14:50:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.564 [2024-12-09 14:50:30.587181] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:52.564 [2024-12-09 14:50:30.587405] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88765 ] 00:17:52.824 [2024-12-09 14:50:30.757473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.825 [2024-12-09 14:50:30.873204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.084 [2024-12-09 14:50:31.066380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.084 [2024-12-09 14:50:31.066450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:53.343 14:50:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.343 malloc1 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.343 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.603 [2024-12-09 14:50:31.469399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.603 [2024-12-09 14:50:31.469492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.603 [2024-12-09 14:50:31.469549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:53.603 [2024-12-09 14:50:31.469590] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.603 [2024-12-09 14:50:31.471428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.603 [2024-12-09 14:50:31.471496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:53.603 pt1 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.603 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.604 malloc2 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.604 14:50:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.604 [2024-12-09 14:50:31.527110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.604 [2024-12-09 14:50:31.527164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.604 [2024-12-09 14:50:31.527201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:53.604 [2024-12-09 14:50:31.527216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.604 [2024-12-09 14:50:31.529120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.604 [2024-12-09 14:50:31.529157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.604 pt2 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.604 [2024-12-09 14:50:31.539127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.604 [2024-12-09 14:50:31.541002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.604 [2024-12-09 14:50:31.541176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:53.604 [2024-12-09 14:50:31.541190] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.604 [2024-12-09 14:50:31.541267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:53.604 [2024-12-09 14:50:31.541400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:53.604 [2024-12-09 14:50:31.541411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:53.604 [2024-12-09 14:50:31.541516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.604 14:50:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.604 "name": "raid_bdev1", 00:17:53.604 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:53.604 "strip_size_kb": 0, 00:17:53.604 "state": "online", 00:17:53.604 "raid_level": "raid1", 00:17:53.604 "superblock": true, 00:17:53.604 "num_base_bdevs": 2, 00:17:53.604 "num_base_bdevs_discovered": 2, 00:17:53.604 "num_base_bdevs_operational": 2, 00:17:53.604 "base_bdevs_list": [ 00:17:53.604 { 00:17:53.604 "name": "pt1", 00:17:53.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.604 "is_configured": true, 00:17:53.604 "data_offset": 256, 00:17:53.604 "data_size": 7936 00:17:53.604 }, 00:17:53.604 { 00:17:53.604 "name": "pt2", 00:17:53.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.604 "is_configured": true, 00:17:53.604 "data_offset": 256, 00:17:53.604 "data_size": 7936 00:17:53.604 } 00:17:53.604 ] 00:17:53.604 }' 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.604 14:50:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.172 [2024-12-09 14:50:32.038629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.172 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.172 "name": "raid_bdev1", 00:17:54.172 "aliases": [ 00:17:54.172 "32eedcfc-82a4-454d-a24a-a6d953d26c98" 00:17:54.172 ], 00:17:54.172 "product_name": "Raid Volume", 00:17:54.172 "block_size": 4096, 00:17:54.172 "num_blocks": 7936, 00:17:54.172 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:54.172 "md_size": 32, 00:17:54.172 "md_interleave": false, 00:17:54.172 "dif_type": 0, 00:17:54.172 "assigned_rate_limits": { 00:17:54.172 "rw_ios_per_sec": 0, 00:17:54.172 "rw_mbytes_per_sec": 0, 00:17:54.172 "r_mbytes_per_sec": 0, 00:17:54.172 "w_mbytes_per_sec": 0 00:17:54.172 }, 00:17:54.172 "claimed": false, 00:17:54.172 "zoned": false, 
00:17:54.172 "supported_io_types": { 00:17:54.172 "read": true, 00:17:54.172 "write": true, 00:17:54.172 "unmap": false, 00:17:54.172 "flush": false, 00:17:54.172 "reset": true, 00:17:54.172 "nvme_admin": false, 00:17:54.172 "nvme_io": false, 00:17:54.172 "nvme_io_md": false, 00:17:54.172 "write_zeroes": true, 00:17:54.172 "zcopy": false, 00:17:54.172 "get_zone_info": false, 00:17:54.172 "zone_management": false, 00:17:54.172 "zone_append": false, 00:17:54.172 "compare": false, 00:17:54.172 "compare_and_write": false, 00:17:54.172 "abort": false, 00:17:54.172 "seek_hole": false, 00:17:54.172 "seek_data": false, 00:17:54.172 "copy": false, 00:17:54.172 "nvme_iov_md": false 00:17:54.172 }, 00:17:54.172 "memory_domains": [ 00:17:54.172 { 00:17:54.172 "dma_device_id": "system", 00:17:54.172 "dma_device_type": 1 00:17:54.172 }, 00:17:54.172 { 00:17:54.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.172 "dma_device_type": 2 00:17:54.172 }, 00:17:54.172 { 00:17:54.172 "dma_device_id": "system", 00:17:54.172 "dma_device_type": 1 00:17:54.172 }, 00:17:54.172 { 00:17:54.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.172 "dma_device_type": 2 00:17:54.172 } 00:17:54.172 ], 00:17:54.172 "driver_specific": { 00:17:54.172 "raid": { 00:17:54.172 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:54.172 "strip_size_kb": 0, 00:17:54.172 "state": "online", 00:17:54.173 "raid_level": "raid1", 00:17:54.173 "superblock": true, 00:17:54.173 "num_base_bdevs": 2, 00:17:54.173 "num_base_bdevs_discovered": 2, 00:17:54.173 "num_base_bdevs_operational": 2, 00:17:54.173 "base_bdevs_list": [ 00:17:54.173 { 00:17:54.173 "name": "pt1", 00:17:54.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.173 "is_configured": true, 00:17:54.173 "data_offset": 256, 00:17:54.173 "data_size": 7936 00:17:54.173 }, 00:17:54.173 { 00:17:54.173 "name": "pt2", 00:17:54.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.173 "is_configured": true, 00:17:54.173 "data_offset": 256, 
00:17:54.173 "data_size": 7936 00:17:54.173 } 00:17:54.173 ] 00:17:54.173 } 00:17:54.173 } 00:17:54.173 }' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:54.173 pt2' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.173 [2024-12-09 14:50:32.266147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.173 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32eedcfc-82a4-454d-a24a-a6d953d26c98 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 32eedcfc-82a4-454d-a24a-a6d953d26c98 ']' 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.433 [2024-12-09 14:50:32.309776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.433 [2024-12-09 14:50:32.309840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.433 [2024-12-09 14:50:32.309948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.433 [2024-12-09 14:50:32.310035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.433 [2024-12-09 14:50:32.310083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
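The `verify_raid_bdev_state` checks visible throughout this trace compare `state`, `raid_level`, and the base-bdev counts pulled out of `bdev_raid_get_bdevs` JSON. A rough standalone sketch, with the JSON trimmed from the trace and a crude `get_field` extractor (a hypothetical helper standing in for the `jq` filters the script actually uses):

```shell
# Sketch of the verify_raid_bdev_state idea: extract fields from the
# bdev_raid_get_bdevs JSON and compare against expected values.
# JSON sample trimmed from the trace; a real run pipes rpc.py output in.
info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2
}'

get_field() {    # crude extractor standing in for: jq -r ".$1"
    printf '%s\n' "$info" |
        sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p" |
        head -n1
}

state=$(get_field state)
raid_level=$(get_field raid_level)
[ "$state" = online ]    || { echo "state mismatch: $state"; exit 1; }
[ "$raid_level" = raid1 ] || { echo "raid_level mismatch: $raid_level"; exit 1; }
echo "raid_bdev1 verified: $state/$raid_level"
```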
00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:54.433 14:50:32 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:54.433 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.434 [2024-12-09 14:50:32.445573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:54.434 [2024-12-09 14:50:32.447523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:54.434 [2024-12-09 14:50:32.447681] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:54.434 [2024-12-09 14:50:32.447787] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:54.434 [2024-12-09 14:50:32.447841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.434 [2024-12-09 14:50:32.447879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:54.434 request: 00:17:54.434 { 00:17:54.434 "name": 
"raid_bdev1", 00:17:54.434 "raid_level": "raid1", 00:17:54.434 "base_bdevs": [ 00:17:54.434 "malloc1", 00:17:54.434 "malloc2" 00:17:54.434 ], 00:17:54.434 "superblock": false, 00:17:54.434 "method": "bdev_raid_create", 00:17:54.434 "req_id": 1 00:17:54.434 } 00:17:54.434 Got JSON-RPC error response 00:17:54.434 response: 00:17:54.434 { 00:17:54.434 "code": -17, 00:17:54.434 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:54.434 } 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.434 [2024-12-09 14:50:32.513450] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:54.434 [2024-12-09 14:50:32.513578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.434 [2024-12-09 14:50:32.513630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:54.434 [2024-12-09 14:50:32.513676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.434 [2024-12-09 14:50:32.515685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.434 [2024-12-09 14:50:32.515758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:54.434 [2024-12-09 14:50:32.515838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:54.434 [2024-12-09 14:50:32.515922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.434 pt1 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.434 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.694 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.694 "name": "raid_bdev1", 00:17:54.694 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:54.694 "strip_size_kb": 0, 00:17:54.694 "state": "configuring", 00:17:54.694 "raid_level": "raid1", 00:17:54.694 "superblock": true, 00:17:54.694 "num_base_bdevs": 2, 00:17:54.694 "num_base_bdevs_discovered": 1, 00:17:54.694 "num_base_bdevs_operational": 2, 00:17:54.694 "base_bdevs_list": [ 00:17:54.694 { 00:17:54.694 "name": "pt1", 00:17:54.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.694 "is_configured": true, 00:17:54.694 "data_offset": 256, 00:17:54.694 "data_size": 7936 00:17:54.694 }, 00:17:54.694 { 00:17:54.694 "name": null, 00:17:54.694 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.694 "is_configured": false, 00:17:54.694 "data_offset": 256, 00:17:54.694 "data_size": 7936 00:17:54.694 } 00:17:54.694 ] 00:17:54.694 }' 00:17:54.694 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.694 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.954 [2024-12-09 14:50:32.916747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.954 [2024-12-09 14:50:32.916831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.954 [2024-12-09 14:50:32.916855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:54.954 [2024-12-09 14:50:32.916865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.954 [2024-12-09 14:50:32.917101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.954 [2024-12-09 14:50:32.917117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.954 [2024-12-09 14:50:32.917170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:54.954 [2024-12-09 14:50:32.917192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.954 [2024-12-09 14:50:32.917328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:54.954 [2024-12-09 14:50:32.917342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:54.954 [2024-12-09 14:50:32.917419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:54.954 [2024-12-09 14:50:32.917530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:54.954 [2024-12-09 14:50:32.917538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:54.954 [2024-12-09 14:50:32.917653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.954 pt2 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.954 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.954 "name": "raid_bdev1", 00:17:54.954 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:54.954 "strip_size_kb": 0, 00:17:54.954 "state": "online", 00:17:54.954 "raid_level": "raid1", 00:17:54.954 "superblock": true, 00:17:54.954 "num_base_bdevs": 2, 00:17:54.954 "num_base_bdevs_discovered": 2, 00:17:54.954 "num_base_bdevs_operational": 2, 00:17:54.954 "base_bdevs_list": [ 00:17:54.954 { 00:17:54.954 "name": "pt1", 00:17:54.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.954 "is_configured": true, 00:17:54.955 "data_offset": 256, 00:17:54.955 "data_size": 7936 00:17:54.955 }, 00:17:54.955 { 00:17:54.955 "name": "pt2", 00:17:54.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.955 "is_configured": true, 00:17:54.955 "data_offset": 256, 
00:17:54.955 "data_size": 7936 00:17:54.955 } 00:17:54.955 ] 00:17:54.955 }' 00:17:54.955 14:50:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.955 14:50:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.524 [2024-12-09 14:50:33.396206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.524 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.524 "name": "raid_bdev1", 00:17:55.524 "aliases": [ 00:17:55.524 "32eedcfc-82a4-454d-a24a-a6d953d26c98" 00:17:55.524 ], 00:17:55.524 "product_name": 
"Raid Volume", 00:17:55.524 "block_size": 4096, 00:17:55.524 "num_blocks": 7936, 00:17:55.524 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:55.524 "md_size": 32, 00:17:55.524 "md_interleave": false, 00:17:55.524 "dif_type": 0, 00:17:55.524 "assigned_rate_limits": { 00:17:55.524 "rw_ios_per_sec": 0, 00:17:55.524 "rw_mbytes_per_sec": 0, 00:17:55.524 "r_mbytes_per_sec": 0, 00:17:55.524 "w_mbytes_per_sec": 0 00:17:55.524 }, 00:17:55.524 "claimed": false, 00:17:55.525 "zoned": false, 00:17:55.525 "supported_io_types": { 00:17:55.525 "read": true, 00:17:55.525 "write": true, 00:17:55.525 "unmap": false, 00:17:55.525 "flush": false, 00:17:55.525 "reset": true, 00:17:55.525 "nvme_admin": false, 00:17:55.525 "nvme_io": false, 00:17:55.525 "nvme_io_md": false, 00:17:55.525 "write_zeroes": true, 00:17:55.525 "zcopy": false, 00:17:55.525 "get_zone_info": false, 00:17:55.525 "zone_management": false, 00:17:55.525 "zone_append": false, 00:17:55.525 "compare": false, 00:17:55.525 "compare_and_write": false, 00:17:55.525 "abort": false, 00:17:55.525 "seek_hole": false, 00:17:55.525 "seek_data": false, 00:17:55.525 "copy": false, 00:17:55.525 "nvme_iov_md": false 00:17:55.525 }, 00:17:55.525 "memory_domains": [ 00:17:55.525 { 00:17:55.525 "dma_device_id": "system", 00:17:55.525 "dma_device_type": 1 00:17:55.525 }, 00:17:55.525 { 00:17:55.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.525 "dma_device_type": 2 00:17:55.525 }, 00:17:55.525 { 00:17:55.525 "dma_device_id": "system", 00:17:55.525 "dma_device_type": 1 00:17:55.525 }, 00:17:55.525 { 00:17:55.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.525 "dma_device_type": 2 00:17:55.525 } 00:17:55.525 ], 00:17:55.525 "driver_specific": { 00:17:55.525 "raid": { 00:17:55.525 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:55.525 "strip_size_kb": 0, 00:17:55.525 "state": "online", 00:17:55.525 "raid_level": "raid1", 00:17:55.525 "superblock": true, 00:17:55.525 "num_base_bdevs": 2, 00:17:55.525 
"num_base_bdevs_discovered": 2, 00:17:55.525 "num_base_bdevs_operational": 2, 00:17:55.525 "base_bdevs_list": [ 00:17:55.525 { 00:17:55.525 "name": "pt1", 00:17:55.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.525 "is_configured": true, 00:17:55.525 "data_offset": 256, 00:17:55.525 "data_size": 7936 00:17:55.525 }, 00:17:55.525 { 00:17:55.525 "name": "pt2", 00:17:55.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.525 "is_configured": true, 00:17:55.525 "data_offset": 256, 00:17:55.525 "data_size": 7936 00:17:55.525 } 00:17:55.525 ] 00:17:55.525 } 00:17:55.525 } 00:17:55.525 }' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:55.525 pt2' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.525 
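The two `jq` filters above reduce the `bdev_get_bdevs` dump to the configured base bdev names (`pt1 pt2`) and the metadata geometry string (`4096 32 false 0`). A minimal Python sketch of the same extraction, run against an excerpt of the JSON shown in the log (the excerpt keeps only the fields the filters touch; it is illustrative, not part of the test script):

```python
import json

# Excerpt of the bdev_get_bdevs output for raid_bdev1, copied from the log
info = json.loads("""
{
  "name": "raid_bdev1",
  "block_size": 4096,
  "md_size": 32,
  "md_interleave": false,
  "dif_type": 0,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true}
      ]
    }
  }
}
""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
names = [b["name"]
         for b in info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# (str(...).lower() mirrors jq's string coercion of booleans)
geometry = " ".join(str(info[k]).lower()
                    for k in ("block_size", "md_size", "md_interleave", "dif_type"))

print(" ".join(names))  # pt1 pt2
print(geometry)         # 4096 32 false 0
```

The test then compares each base bdev's `4096 32 false 0` geometry against the raid volume's, confirming the separate-metadata layout is propagated.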
14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.525 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.525 [2024-12-09 14:50:33.631802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 32eedcfc-82a4-454d-a24a-a6d953d26c98 '!=' 32eedcfc-82a4-454d-a24a-a6d953d26c98 ']' 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.785 [2024-12-09 14:50:33.671487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.785 14:50:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.785 "name": "raid_bdev1", 00:17:55.785 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:55.785 "strip_size_kb": 0, 00:17:55.785 "state": "online", 00:17:55.785 "raid_level": "raid1", 00:17:55.785 "superblock": true, 00:17:55.785 "num_base_bdevs": 2, 00:17:55.785 "num_base_bdevs_discovered": 1, 00:17:55.785 "num_base_bdevs_operational": 1, 00:17:55.785 "base_bdevs_list": [ 00:17:55.785 { 00:17:55.785 "name": null, 00:17:55.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.785 "is_configured": false, 00:17:55.785 "data_offset": 0, 00:17:55.785 "data_size": 7936 00:17:55.785 }, 00:17:55.785 { 00:17:55.785 "name": "pt2", 00:17:55.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.785 "is_configured": true, 00:17:55.785 "data_offset": 256, 00:17:55.785 "data_size": 7936 00:17:55.785 } 00:17:55.785 ] 00:17:55.785 }' 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:55.785 14:50:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.355 [2024-12-09 14:50:34.174690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.355 [2024-12-09 14:50:34.174781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.355 [2024-12-09 14:50:34.174905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.355 [2024-12-09 14:50:34.174977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.355 [2024-12-09 14:50:34.175034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:56.355 14:50:34 
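The `verify_raid_bdev_state` call above checks that `raid_bdev1` stays `online` with one discovered and one operational base bdev after `pt1` is removed, since raid1 tolerates losing a mirror leg. A rough Python rendering of that check, using the state fields from the log; `verify_state` is a hypothetical helper sketching the shell function's logic, not part of `bdev_raid.sh`:

```python
import json

# raid_bdev1 state after bdev_passthru_delete pt1, copied from the log
state = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}
""")

def verify_state(info, expected_state, raid_level, operational):
    # Mirrors the comparisons verify_raid_bdev_state performs on the
    # jq-selected raid_bdev_info: state, level, and operational count.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == operational)

print(verify_state(state, "online", "raid1", 1))  # True
```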
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.355 [2024-12-09 14:50:34.246534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.355 [2024-12-09 14:50:34.246600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.355 
[2024-12-09 14:50:34.246618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:56.355 [2024-12-09 14:50:34.246628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.355 [2024-12-09 14:50:34.248709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.355 [2024-12-09 14:50:34.248784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.355 [2024-12-09 14:50:34.248874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:56.355 [2024-12-09 14:50:34.248947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.355 [2024-12-09 14:50:34.249072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:56.355 [2024-12-09 14:50:34.249110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.355 [2024-12-09 14:50:34.249205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:56.355 [2024-12-09 14:50:34.249346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:56.355 [2024-12-09 14:50:34.249380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:56.355 [2024-12-09 14:50:34.249508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.355 pt2 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.355 "name": "raid_bdev1", 00:17:56.355 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:56.355 "strip_size_kb": 0, 00:17:56.355 "state": "online", 00:17:56.355 "raid_level": "raid1", 00:17:56.355 "superblock": true, 00:17:56.355 "num_base_bdevs": 2, 00:17:56.355 "num_base_bdevs_discovered": 1, 00:17:56.355 "num_base_bdevs_operational": 1, 00:17:56.355 "base_bdevs_list": [ 00:17:56.355 { 00:17:56.355 
"name": null, 00:17:56.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.355 "is_configured": false, 00:17:56.355 "data_offset": 256, 00:17:56.355 "data_size": 7936 00:17:56.355 }, 00:17:56.355 { 00:17:56.355 "name": "pt2", 00:17:56.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.355 "is_configured": true, 00:17:56.355 "data_offset": 256, 00:17:56.355 "data_size": 7936 00:17:56.355 } 00:17:56.355 ] 00:17:56.355 }' 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.355 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.615 [2024-12-09 14:50:34.709730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.615 [2024-12-09 14:50:34.709763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.615 [2024-12-09 14:50:34.709845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.615 [2024-12-09 14:50:34.709902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.615 [2024-12-09 14:50:34.709914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.615 14:50:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:56.615 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.876 [2024-12-09 14:50:34.773676] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:56.876 [2024-12-09 14:50:34.773744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.876 [2024-12-09 14:50:34.773766] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:56.876 [2024-12-09 14:50:34.773777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.876 [2024-12-09 14:50:34.775965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.876 [2024-12-09 14:50:34.776004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:56.876 [2024-12-09 14:50:34.776067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:17:56.876 [2024-12-09 14:50:34.776123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:56.876 [2024-12-09 14:50:34.776255] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:56.876 [2024-12-09 14:50:34.776266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.876 [2024-12-09 14:50:34.776288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:56.876 [2024-12-09 14:50:34.776383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.876 [2024-12-09 14:50:34.776482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:56.876 [2024-12-09 14:50:34.776497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.876 [2024-12-09 14:50:34.776586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:56.876 [2024-12-09 14:50:34.776726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:56.876 [2024-12-09 14:50:34.776737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:56.876 [2024-12-09 14:50:34.776841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.876 pt1 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.876 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.876 "name": "raid_bdev1", 00:17:56.876 "uuid": "32eedcfc-82a4-454d-a24a-a6d953d26c98", 00:17:56.876 "strip_size_kb": 0, 00:17:56.876 "state": "online", 00:17:56.876 "raid_level": "raid1", 00:17:56.876 "superblock": true, 00:17:56.876 "num_base_bdevs": 2, 00:17:56.876 "num_base_bdevs_discovered": 1, 00:17:56.876 
"num_base_bdevs_operational": 1, 00:17:56.876 "base_bdevs_list": [ 00:17:56.876 { 00:17:56.876 "name": null, 00:17:56.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.876 "is_configured": false, 00:17:56.876 "data_offset": 256, 00:17:56.876 "data_size": 7936 00:17:56.876 }, 00:17:56.876 { 00:17:56.876 "name": "pt2", 00:17:56.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.876 "is_configured": true, 00:17:56.876 "data_offset": 256, 00:17:56.876 "data_size": 7936 00:17:56.876 } 00:17:56.877 ] 00:17:56.877 }' 00:17:56.877 14:50:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.877 14:50:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.137 14:50:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:57.137 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.137 14:50:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:57.137 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.137 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.397 [2024-12-09 
14:50:35.285008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 32eedcfc-82a4-454d-a24a-a6d953d26c98 '!=' 32eedcfc-82a4-454d-a24a-a6d953d26c98 ']' 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88765 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88765 ']' 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88765 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88765 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88765' 00:17:57.397 killing process with pid 88765 00:17:57.397 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88765 00:17:57.397 [2024-12-09 14:50:35.367410] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.397 [2024-12-09 14:50:35.367512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.397 [2024-12-09 14:50:35.367565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:57.398 [2024-12-09 14:50:35.367601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:57.398 14:50:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88765 00:17:57.658 [2024-12-09 14:50:35.582461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.597 14:50:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:58.597 00:17:58.597 real 0m6.192s 00:17:58.597 user 0m9.415s 00:17:58.597 sys 0m1.135s 00:17:58.597 14:50:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.597 14:50:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.597 ************************************ 00:17:58.597 END TEST raid_superblock_test_md_separate 00:17:58.597 ************************************ 00:17:58.856 14:50:36 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:58.856 14:50:36 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:58.856 14:50:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:58.856 14:50:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.856 14:50:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.856 ************************************ 00:17:58.856 START TEST raid_rebuild_test_sb_md_separate 00:17:58.856 ************************************ 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:58.856 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:58.856 
14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=89088 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 89088 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89088 ']' 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.857 14:50:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.857 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:58.857 Zero copy mechanism will not be used. 00:17:58.857 [2024-12-09 14:50:36.862403] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:58.857 [2024-12-09 14:50:36.862536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89088 ] 00:17:59.117 [2024-12-09 14:50:37.036813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.117 [2024-12-09 14:50:37.148442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.377 [2024-12-09 14:50:37.338076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.377 [2024-12-09 14:50:37.338105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.637 BaseBdev1_malloc 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:59.637 14:50:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.637 [2024-12-09 14:50:37.737048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:59.637 [2024-12-09 14:50:37.737107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.637 [2024-12-09 14:50:37.737144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:59.637 [2024-12-09 14:50:37.737155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.637 [2024-12-09 14:50:37.738991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.637 [2024-12-09 14:50:37.739102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:59.637 BaseBdev1 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.637 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.897 BaseBdev2_malloc 00:17:59.897 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.897 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:59.897 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:59.897 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.897 [2024-12-09 14:50:37.792223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:59.897 [2024-12-09 14:50:37.792303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.897 [2024-12-09 14:50:37.792325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:59.897 [2024-12-09 14:50:37.792338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.897 [2024-12-09 14:50:37.794309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.897 [2024-12-09 14:50:37.794349] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:59.898 BaseBdev2 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.898 spare_malloc 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.898 spare_delay 00:17:59.898 14:50:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.898 [2024-12-09 14:50:37.866348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:59.898 [2024-12-09 14:50:37.866402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.898 [2024-12-09 14:50:37.866422] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:59.898 [2024-12-09 14:50:37.866433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.898 [2024-12-09 14:50:37.868269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.898 [2024-12-09 14:50:37.868311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:59.898 spare 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.898 [2024-12-09 14:50:37.878366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.898 [2024-12-09 14:50:37.880167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:59.898 [2024-12-09 14:50:37.880344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:59.898 [2024-12-09 14:50:37.880359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:59.898 [2024-12-09 14:50:37.880429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:59.898 [2024-12-09 14:50:37.880550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:59.898 [2024-12-09 14:50:37.880559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:59.898 [2024-12-09 14:50:37.880679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.898 "name": "raid_bdev1", 00:17:59.898 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:17:59.898 "strip_size_kb": 0, 00:17:59.898 "state": "online", 00:17:59.898 "raid_level": "raid1", 00:17:59.898 "superblock": true, 00:17:59.898 "num_base_bdevs": 2, 00:17:59.898 "num_base_bdevs_discovered": 2, 00:17:59.898 "num_base_bdevs_operational": 2, 00:17:59.898 "base_bdevs_list": [ 00:17:59.898 { 00:17:59.898 "name": "BaseBdev1", 00:17:59.898 "uuid": "baf9004f-9690-5e62-9cbc-ff7ef0bb2730", 00:17:59.898 "is_configured": true, 00:17:59.898 "data_offset": 256, 00:17:59.898 "data_size": 7936 00:17:59.898 }, 00:17:59.898 { 00:17:59.898 "name": "BaseBdev2", 00:17:59.898 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:17:59.898 "is_configured": true, 00:17:59.898 "data_offset": 256, 00:17:59.898 "data_size": 7936 00:17:59.898 } 00:17:59.898 ] 00:17:59.898 }' 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.898 14:50:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.468 14:50:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:00.468 [2024-12-09 14:50:38.361889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.468 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:00.728 [2024-12-09 14:50:38.641182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:00.728 /dev/nbd0 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:00.728 
14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:00.728 1+0 records in 00:18:00.728 1+0 records out 00:18:00.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594612 s, 6.9 MB/s 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:00.728 14:50:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:01.298 7936+0 records in 00:18:01.298 7936+0 records out 00:18:01.298 32505856 bytes (33 MB, 31 MiB) copied, 0.6254 s, 52.0 MB/s 00:18:01.299 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:01.299 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.299 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:01.299 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.299 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:01.299 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.299 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:01.564 [2024-12-09 14:50:39.550714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.564 [2024-12-09 14:50:39.566808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.564 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.565 "name": "raid_bdev1", 00:18:01.565 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:01.565 "strip_size_kb": 0, 00:18:01.565 "state": "online", 00:18:01.565 "raid_level": "raid1", 00:18:01.565 "superblock": true, 00:18:01.565 "num_base_bdevs": 2, 00:18:01.565 "num_base_bdevs_discovered": 1, 00:18:01.565 "num_base_bdevs_operational": 1, 00:18:01.565 "base_bdevs_list": [ 00:18:01.565 { 00:18:01.565 "name": null, 00:18:01.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.565 "is_configured": false, 00:18:01.565 "data_offset": 0, 00:18:01.565 "data_size": 7936 00:18:01.565 }, 00:18:01.565 { 00:18:01.565 "name": "BaseBdev2", 00:18:01.565 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:01.565 "is_configured": true, 00:18:01.565 "data_offset": 256, 00:18:01.565 "data_size": 7936 00:18:01.565 } 00:18:01.565 ] 00:18:01.565 }' 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.565 14:50:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.140 14:50:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:02.140 14:50:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:02.140 14:50:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.140 [2024-12-09 14:50:40.041986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.140 [2024-12-09 14:50:40.056603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:02.140 14:50:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.140 14:50:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:02.140 [2024-12-09 14:50:40.058452] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.079 14:50:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.079 "name": "raid_bdev1", 00:18:03.079 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:03.079 "strip_size_kb": 0, 00:18:03.079 "state": "online", 00:18:03.079 "raid_level": "raid1", 00:18:03.079 "superblock": true, 00:18:03.079 "num_base_bdevs": 2, 00:18:03.079 "num_base_bdevs_discovered": 2, 00:18:03.079 "num_base_bdevs_operational": 2, 00:18:03.079 "process": { 00:18:03.079 "type": "rebuild", 00:18:03.079 "target": "spare", 00:18:03.079 "progress": { 00:18:03.079 "blocks": 2560, 00:18:03.079 "percent": 32 00:18:03.079 } 00:18:03.079 }, 00:18:03.079 "base_bdevs_list": [ 00:18:03.079 { 00:18:03.079 "name": "spare", 00:18:03.079 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:03.079 "is_configured": true, 00:18:03.079 "data_offset": 256, 00:18:03.079 "data_size": 7936 00:18:03.079 }, 00:18:03.079 { 00:18:03.079 "name": "BaseBdev2", 00:18:03.079 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:03.079 "is_configured": true, 00:18:03.079 "data_offset": 256, 00:18:03.079 "data_size": 7936 00:18:03.079 } 00:18:03.079 ] 00:18:03.079 }' 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.079 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.339 [2024-12-09 14:50:41.222503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.339 [2024-12-09 14:50:41.264360] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:03.339 [2024-12-09 14:50:41.264495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.339 [2024-12-09 14:50:41.264557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.339 [2024-12-09 14:50:41.264583] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.339 "name": "raid_bdev1", 00:18:03.339 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:03.339 "strip_size_kb": 0, 00:18:03.339 "state": "online", 00:18:03.339 "raid_level": "raid1", 00:18:03.339 "superblock": true, 00:18:03.339 "num_base_bdevs": 2, 00:18:03.339 "num_base_bdevs_discovered": 1, 00:18:03.339 "num_base_bdevs_operational": 1, 00:18:03.339 "base_bdevs_list": [ 00:18:03.339 { 00:18:03.339 "name": null, 00:18:03.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.339 "is_configured": false, 00:18:03.339 "data_offset": 0, 00:18:03.339 "data_size": 7936 00:18:03.339 }, 00:18:03.339 { 00:18:03.339 "name": "BaseBdev2", 00:18:03.339 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:03.339 "is_configured": true, 00:18:03.339 "data_offset": 256, 00:18:03.339 "data_size": 7936 00:18:03.339 } 00:18:03.339 ] 00:18:03.339 }' 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.339 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.908 14:50:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.908 "name": "raid_bdev1", 00:18:03.908 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:03.908 "strip_size_kb": 0, 00:18:03.908 "state": "online", 00:18:03.908 "raid_level": "raid1", 00:18:03.908 "superblock": true, 00:18:03.908 "num_base_bdevs": 2, 00:18:03.908 "num_base_bdevs_discovered": 1, 00:18:03.908 "num_base_bdevs_operational": 1, 00:18:03.908 "base_bdevs_list": [ 00:18:03.908 { 00:18:03.908 "name": null, 00:18:03.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.908 "is_configured": false, 00:18:03.908 "data_offset": 0, 00:18:03.908 "data_size": 7936 00:18:03.908 }, 00:18:03.908 { 00:18:03.908 "name": "BaseBdev2", 00:18:03.908 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:03.908 "is_configured": true, 00:18:03.908 "data_offset": 256, 00:18:03.908 "data_size": 7936 
00:18:03.908 } 00:18:03.908 ] 00:18:03.908 }' 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.908 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.909 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.909 [2024-12-09 14:50:41.880139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.909 [2024-12-09 14:50:41.894013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:03.909 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.909 14:50:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:03.909 [2024-12-09 14:50:41.895837] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.848 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.848 "name": "raid_bdev1", 00:18:04.848 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:04.848 "strip_size_kb": 0, 00:18:04.848 "state": "online", 00:18:04.849 "raid_level": "raid1", 00:18:04.849 "superblock": true, 00:18:04.849 "num_base_bdevs": 2, 00:18:04.849 "num_base_bdevs_discovered": 2, 00:18:04.849 "num_base_bdevs_operational": 2, 00:18:04.849 "process": { 00:18:04.849 "type": "rebuild", 00:18:04.849 "target": "spare", 00:18:04.849 "progress": { 00:18:04.849 "blocks": 2560, 00:18:04.849 "percent": 32 00:18:04.849 } 00:18:04.849 }, 00:18:04.849 "base_bdevs_list": [ 00:18:04.849 { 00:18:04.849 "name": "spare", 00:18:04.849 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:04.849 "is_configured": true, 00:18:04.849 "data_offset": 256, 00:18:04.849 "data_size": 7936 00:18:04.849 }, 00:18:04.849 { 00:18:04.849 "name": "BaseBdev2", 00:18:04.849 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:04.849 "is_configured": true, 00:18:04.849 "data_offset": 256, 00:18:04.849 "data_size": 7936 00:18:04.849 } 00:18:04.849 ] 00:18:04.849 }' 00:18:04.849 14:50:42 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.108 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.108 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.108 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.108 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:05.108 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:05.108 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:05.108 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:05.108 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=715 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.109 
14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.109 "name": "raid_bdev1", 00:18:05.109 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:05.109 "strip_size_kb": 0, 00:18:05.109 "state": "online", 00:18:05.109 "raid_level": "raid1", 00:18:05.109 "superblock": true, 00:18:05.109 "num_base_bdevs": 2, 00:18:05.109 "num_base_bdevs_discovered": 2, 00:18:05.109 "num_base_bdevs_operational": 2, 00:18:05.109 "process": { 00:18:05.109 "type": "rebuild", 00:18:05.109 "target": "spare", 00:18:05.109 "progress": { 00:18:05.109 "blocks": 2816, 00:18:05.109 "percent": 35 00:18:05.109 } 00:18:05.109 }, 00:18:05.109 "base_bdevs_list": [ 00:18:05.109 { 00:18:05.109 "name": "spare", 00:18:05.109 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:05.109 "is_configured": true, 00:18:05.109 "data_offset": 256, 00:18:05.109 "data_size": 7936 00:18:05.109 }, 00:18:05.109 { 00:18:05.109 "name": "BaseBdev2", 00:18:05.109 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:05.109 "is_configured": true, 00:18:05.109 "data_offset": 256, 00:18:05.109 "data_size": 7936 00:18:05.109 } 00:18:05.109 ] 00:18:05.109 }' 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.109 14:50:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.490 "name": "raid_bdev1", 00:18:06.490 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:06.490 "strip_size_kb": 0, 00:18:06.490 
"state": "online", 00:18:06.490 "raid_level": "raid1", 00:18:06.490 "superblock": true, 00:18:06.490 "num_base_bdevs": 2, 00:18:06.490 "num_base_bdevs_discovered": 2, 00:18:06.490 "num_base_bdevs_operational": 2, 00:18:06.490 "process": { 00:18:06.490 "type": "rebuild", 00:18:06.490 "target": "spare", 00:18:06.490 "progress": { 00:18:06.490 "blocks": 5888, 00:18:06.490 "percent": 74 00:18:06.490 } 00:18:06.490 }, 00:18:06.490 "base_bdevs_list": [ 00:18:06.490 { 00:18:06.490 "name": "spare", 00:18:06.490 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:06.490 "is_configured": true, 00:18:06.490 "data_offset": 256, 00:18:06.490 "data_size": 7936 00:18:06.490 }, 00:18:06.490 { 00:18:06.490 "name": "BaseBdev2", 00:18:06.490 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:06.490 "is_configured": true, 00:18:06.490 "data_offset": 256, 00:18:06.490 "data_size": 7936 00:18:06.490 } 00:18:06.490 ] 00:18:06.490 }' 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.490 14:50:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:07.059 [2024-12-09 14:50:45.010828] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:07.059 [2024-12-09 14:50:45.010906] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:07.059 [2024-12-09 14:50:45.011027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.318 "name": "raid_bdev1", 00:18:07.318 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:07.318 "strip_size_kb": 0, 00:18:07.318 "state": "online", 00:18:07.318 "raid_level": "raid1", 00:18:07.318 "superblock": true, 00:18:07.318 "num_base_bdevs": 2, 00:18:07.318 "num_base_bdevs_discovered": 2, 00:18:07.318 "num_base_bdevs_operational": 2, 00:18:07.318 "base_bdevs_list": [ 00:18:07.318 { 00:18:07.318 "name": "spare", 00:18:07.318 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:07.318 "is_configured": true, 00:18:07.318 "data_offset": 256, 00:18:07.318 "data_size": 7936 
00:18:07.318 }, 00:18:07.318 { 00:18:07.318 "name": "BaseBdev2", 00:18:07.318 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:07.318 "is_configured": true, 00:18:07.318 "data_offset": 256, 00:18:07.318 "data_size": 7936 00:18:07.318 } 00:18:07.318 ] 00:18:07.318 }' 00:18:07.318 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.576 
14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.576 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.576 "name": "raid_bdev1", 00:18:07.576 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:07.576 "strip_size_kb": 0, 00:18:07.576 "state": "online", 00:18:07.576 "raid_level": "raid1", 00:18:07.576 "superblock": true, 00:18:07.576 "num_base_bdevs": 2, 00:18:07.576 "num_base_bdevs_discovered": 2, 00:18:07.576 "num_base_bdevs_operational": 2, 00:18:07.576 "base_bdevs_list": [ 00:18:07.576 { 00:18:07.576 "name": "spare", 00:18:07.576 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:07.577 "is_configured": true, 00:18:07.577 "data_offset": 256, 00:18:07.577 "data_size": 7936 00:18:07.577 }, 00:18:07.577 { 00:18:07.577 "name": "BaseBdev2", 00:18:07.577 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:07.577 "is_configured": true, 00:18:07.577 "data_offset": 256, 00:18:07.577 "data_size": 7936 00:18:07.577 } 00:18:07.577 ] 00:18:07.577 }' 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.577 14:50:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.577 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.835 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.835 "name": "raid_bdev1", 00:18:07.835 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:07.835 "strip_size_kb": 0, 00:18:07.835 "state": "online", 00:18:07.835 "raid_level": "raid1", 00:18:07.835 "superblock": true, 00:18:07.835 "num_base_bdevs": 2, 00:18:07.835 "num_base_bdevs_discovered": 2, 00:18:07.835 "num_base_bdevs_operational": 2, 00:18:07.835 "base_bdevs_list": [ 00:18:07.835 { 00:18:07.835 "name": "spare", 00:18:07.835 "uuid": 
"bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:07.835 "is_configured": true, 00:18:07.835 "data_offset": 256, 00:18:07.835 "data_size": 7936 00:18:07.835 }, 00:18:07.835 { 00:18:07.835 "name": "BaseBdev2", 00:18:07.835 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:07.835 "is_configured": true, 00:18:07.835 "data_offset": 256, 00:18:07.835 "data_size": 7936 00:18:07.835 } 00:18:07.835 ] 00:18:07.835 }' 00:18:07.835 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.835 14:50:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.095 [2024-12-09 14:50:46.122971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.095 [2024-12-09 14:50:46.123057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.095 [2024-12-09 14:50:46.123194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.095 [2024-12-09 14:50:46.123312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.095 [2024-12-09 14:50:46.123374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.095 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:18:08.354 /dev/nbd0 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.354 1+0 records in 00:18:08.354 1+0 records out 00:18:08.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329214 s, 12.4 MB/s 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.354 14:50:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.354 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.355 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:08.614 /dev/nbd1 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:08.614 1+0 records in 00:18:08.614 1+0 records out 00:18:08.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451585 s, 9.1 MB/s 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.614 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:08.873 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:08.873 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.873 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.873 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.873 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:08.873 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.873 14:50:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.132 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:09.392 
14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 [2024-12-09 14:50:47.384828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.392 [2024-12-09 14:50:47.384890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.392 [2024-12-09 14:50:47.384914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:09.392 [2024-12-09 14:50:47.384925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.392 [2024-12-09 14:50:47.387042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.392 [2024-12-09 14:50:47.387081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.392 [2024-12-09 14:50:47.387150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:09.392 [2024-12-09 14:50:47.387212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.392 [2024-12-09 14:50:47.387385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.392 spare 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 [2024-12-09 14:50:47.487292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:09.392 [2024-12-09 14:50:47.487335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.392 [2024-12-09 14:50:47.487486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:09.392 [2024-12-09 14:50:47.487679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:09.392 [2024-12-09 14:50:47.487694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:09.392 [2024-12-09 14:50:47.487849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.392 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.393 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.652 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.652 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.652 "name": "raid_bdev1", 00:18:09.652 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:09.652 "strip_size_kb": 0, 00:18:09.652 "state": "online", 00:18:09.652 "raid_level": "raid1", 00:18:09.652 "superblock": true, 00:18:09.652 "num_base_bdevs": 2, 00:18:09.652 "num_base_bdevs_discovered": 2, 00:18:09.652 "num_base_bdevs_operational": 2, 00:18:09.652 "base_bdevs_list": [ 
00:18:09.652 { 00:18:09.652 "name": "spare", 00:18:09.652 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:09.652 "is_configured": true, 00:18:09.652 "data_offset": 256, 00:18:09.652 "data_size": 7936 00:18:09.652 }, 00:18:09.652 { 00:18:09.652 "name": "BaseBdev2", 00:18:09.652 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:09.652 "is_configured": true, 00:18:09.652 "data_offset": 256, 00:18:09.652 "data_size": 7936 00:18:09.652 } 00:18:09.652 ] 00:18:09.652 }' 00:18:09.652 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.652 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.912 "name": "raid_bdev1", 00:18:09.912 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:09.912 "strip_size_kb": 0, 00:18:09.912 "state": "online", 00:18:09.912 "raid_level": "raid1", 00:18:09.912 "superblock": true, 00:18:09.912 "num_base_bdevs": 2, 00:18:09.912 "num_base_bdevs_discovered": 2, 00:18:09.912 "num_base_bdevs_operational": 2, 00:18:09.912 "base_bdevs_list": [ 00:18:09.912 { 00:18:09.912 "name": "spare", 00:18:09.912 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:09.912 "is_configured": true, 00:18:09.912 "data_offset": 256, 00:18:09.912 "data_size": 7936 00:18:09.912 }, 00:18:09.912 { 00:18:09.912 "name": "BaseBdev2", 00:18:09.912 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:09.912 "is_configured": true, 00:18:09.912 "data_offset": 256, 00:18:09.912 "data_size": 7936 00:18:09.912 } 00:18:09.912 ] 00:18:09.912 }' 00:18:09.912 14:50:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.912 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.912 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.172 [2024-12-09 14:50:48.099704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.172 "name": "raid_bdev1", 00:18:10.172 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:10.172 "strip_size_kb": 0, 00:18:10.172 "state": "online", 00:18:10.172 "raid_level": "raid1", 00:18:10.172 "superblock": true, 00:18:10.172 "num_base_bdevs": 2, 00:18:10.172 "num_base_bdevs_discovered": 1, 00:18:10.172 "num_base_bdevs_operational": 1, 00:18:10.172 "base_bdevs_list": [ 00:18:10.172 { 00:18:10.172 "name": null, 00:18:10.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.172 "is_configured": false, 00:18:10.172 "data_offset": 0, 00:18:10.172 "data_size": 7936 00:18:10.172 }, 00:18:10.172 { 00:18:10.172 "name": "BaseBdev2", 00:18:10.172 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:10.172 "is_configured": true, 00:18:10.172 "data_offset": 256, 00:18:10.172 "data_size": 7936 00:18:10.172 } 00:18:10.172 ] 00:18:10.172 }' 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.172 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.741 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.741 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:10.741 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.741 [2024-12-09 14:50:48.578987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.741 [2024-12-09 14:50:48.579189] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:10.741 [2024-12-09 14:50:48.579220] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:10.741 [2024-12-09 14:50:48.579270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.741 [2024-12-09 14:50:48.594005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:10.741 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.741 14:50:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:10.741 [2024-12-09 14:50:48.595878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.705 "name": "raid_bdev1", 00:18:11.705 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:11.705 "strip_size_kb": 0, 00:18:11.705 "state": "online", 00:18:11.705 "raid_level": "raid1", 00:18:11.705 "superblock": true, 00:18:11.705 "num_base_bdevs": 2, 00:18:11.705 "num_base_bdevs_discovered": 2, 00:18:11.705 "num_base_bdevs_operational": 2, 00:18:11.705 "process": { 00:18:11.705 "type": "rebuild", 00:18:11.705 "target": "spare", 00:18:11.705 "progress": { 00:18:11.705 "blocks": 2560, 00:18:11.705 "percent": 32 00:18:11.705 } 00:18:11.705 }, 00:18:11.705 "base_bdevs_list": [ 00:18:11.705 { 00:18:11.705 "name": "spare", 00:18:11.705 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:11.705 "is_configured": true, 00:18:11.705 "data_offset": 256, 00:18:11.705 "data_size": 7936 00:18:11.705 }, 00:18:11.705 { 00:18:11.705 "name": "BaseBdev2", 00:18:11.705 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:11.705 "is_configured": true, 00:18:11.705 "data_offset": 256, 00:18:11.705 "data_size": 7936 00:18:11.705 } 00:18:11.705 ] 00:18:11.705 }' 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.705 14:50:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.705 [2024-12-09 14:50:49.759716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.705 [2024-12-09 14:50:49.801201] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.705 [2024-12-09 14:50:49.801276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.705 [2024-12-09 14:50:49.801290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.705 [2024-12-09 14:50:49.801309] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.705 14:50:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.705 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.964 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.964 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.964 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.964 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.964 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.964 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.964 "name": "raid_bdev1", 00:18:11.964 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:11.964 "strip_size_kb": 0, 00:18:11.964 "state": "online", 00:18:11.964 "raid_level": "raid1", 00:18:11.965 "superblock": true, 00:18:11.965 "num_base_bdevs": 2, 00:18:11.965 "num_base_bdevs_discovered": 1, 00:18:11.965 "num_base_bdevs_operational": 1, 00:18:11.965 "base_bdevs_list": [ 00:18:11.965 { 00:18:11.965 "name": null, 00:18:11.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.965 "is_configured": false, 00:18:11.965 "data_offset": 0, 00:18:11.965 "data_size": 7936 00:18:11.965 }, 00:18:11.965 { 00:18:11.965 "name": "BaseBdev2", 00:18:11.965 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:11.965 "is_configured": true, 00:18:11.965 "data_offset": 256, 00:18:11.965 "data_size": 7936 00:18:11.965 } 
00:18:11.965 ] 00:18:11.965 }' 00:18:11.965 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.965 14:50:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.224 14:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.224 14:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.224 14:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.224 [2024-12-09 14:50:50.272222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.224 [2024-12-09 14:50:50.272293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.224 [2024-12-09 14:50:50.272319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:12.224 [2024-12-09 14:50:50.272330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.224 [2024-12-09 14:50:50.272638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.224 [2024-12-09 14:50:50.272665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.224 [2024-12-09 14:50:50.272728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:12.224 [2024-12-09 14:50:50.272747] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:12.224 [2024-12-09 14:50:50.272758] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:12.224 [2024-12-09 14:50:50.272777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.224 [2024-12-09 14:50:50.286858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:12.224 spare 00:18:12.224 14:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.224 14:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:12.224 [2024-12-09 14:50:50.288695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.604 "name": 
"raid_bdev1", 00:18:13.604 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:13.604 "strip_size_kb": 0, 00:18:13.604 "state": "online", 00:18:13.604 "raid_level": "raid1", 00:18:13.604 "superblock": true, 00:18:13.604 "num_base_bdevs": 2, 00:18:13.604 "num_base_bdevs_discovered": 2, 00:18:13.604 "num_base_bdevs_operational": 2, 00:18:13.604 "process": { 00:18:13.604 "type": "rebuild", 00:18:13.604 "target": "spare", 00:18:13.604 "progress": { 00:18:13.604 "blocks": 2560, 00:18:13.604 "percent": 32 00:18:13.604 } 00:18:13.604 }, 00:18:13.604 "base_bdevs_list": [ 00:18:13.604 { 00:18:13.604 "name": "spare", 00:18:13.604 "uuid": "bf256fd2-ad80-5a08-8783-99ae9fdb5cf7", 00:18:13.604 "is_configured": true, 00:18:13.604 "data_offset": 256, 00:18:13.604 "data_size": 7936 00:18:13.604 }, 00:18:13.604 { 00:18:13.604 "name": "BaseBdev2", 00:18:13.604 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:13.604 "is_configured": true, 00:18:13.604 "data_offset": 256, 00:18:13.604 "data_size": 7936 00:18:13.604 } 00:18:13.604 ] 00:18:13.604 }' 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 [2024-12-09 14:50:51.452549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:13.604 [2024-12-09 14:50:51.494105] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:13.604 [2024-12-09 14:50:51.494177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.604 [2024-12-09 14:50:51.494193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.604 [2024-12-09 14:50:51.494200] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.604 "name": "raid_bdev1", 00:18:13.604 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:13.604 "strip_size_kb": 0, 00:18:13.604 "state": "online", 00:18:13.604 "raid_level": "raid1", 00:18:13.604 "superblock": true, 00:18:13.604 "num_base_bdevs": 2, 00:18:13.604 "num_base_bdevs_discovered": 1, 00:18:13.604 "num_base_bdevs_operational": 1, 00:18:13.604 "base_bdevs_list": [ 00:18:13.604 { 00:18:13.604 "name": null, 00:18:13.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.604 "is_configured": false, 00:18:13.604 "data_offset": 0, 00:18:13.604 "data_size": 7936 00:18:13.604 }, 00:18:13.604 { 00:18:13.604 "name": "BaseBdev2", 00:18:13.604 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:13.604 "is_configured": true, 00:18:13.604 "data_offset": 256, 00:18:13.604 "data_size": 7936 00:18:13.604 } 00:18:13.604 ] 00:18:13.604 }' 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.604 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.863 14:50:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.863 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.122 14:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.122 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.122 "name": "raid_bdev1", 00:18:14.122 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:14.122 "strip_size_kb": 0, 00:18:14.122 "state": "online", 00:18:14.122 "raid_level": "raid1", 00:18:14.122 "superblock": true, 00:18:14.122 "num_base_bdevs": 2, 00:18:14.122 "num_base_bdevs_discovered": 1, 00:18:14.122 "num_base_bdevs_operational": 1, 00:18:14.122 "base_bdevs_list": [ 00:18:14.122 { 00:18:14.122 "name": null, 00:18:14.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.122 "is_configured": false, 00:18:14.122 "data_offset": 0, 00:18:14.122 "data_size": 7936 00:18:14.122 }, 00:18:14.122 { 00:18:14.122 "name": "BaseBdev2", 00:18:14.122 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:14.122 "is_configured": true, 00:18:14.122 "data_offset": 256, 00:18:14.122 "data_size": 7936 00:18:14.123 } 00:18:14.123 ] 00:18:14.123 }' 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.123 [2024-12-09 14:50:52.121487] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:14.123 [2024-12-09 14:50:52.121545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.123 [2024-12-09 14:50:52.121566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:14.123 [2024-12-09 14:50:52.121585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.123 [2024-12-09 14:50:52.121809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.123 [2024-12-09 14:50:52.121828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:14.123 [2024-12-09 14:50:52.121880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:14.123 [2024-12-09 14:50:52.121893] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:14.123 [2024-12-09 14:50:52.121902] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:14.123 [2024-12-09 14:50:52.121912] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:14.123 BaseBdev1 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.123 14:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.061 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.320 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.320 "name": "raid_bdev1", 00:18:15.320 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:15.320 "strip_size_kb": 0, 00:18:15.320 "state": "online", 00:18:15.320 "raid_level": "raid1", 00:18:15.320 "superblock": true, 00:18:15.320 "num_base_bdevs": 2, 00:18:15.320 "num_base_bdevs_discovered": 1, 00:18:15.320 "num_base_bdevs_operational": 1, 00:18:15.320 "base_bdevs_list": [ 00:18:15.320 { 00:18:15.320 "name": null, 00:18:15.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.320 "is_configured": false, 00:18:15.320 "data_offset": 0, 00:18:15.320 "data_size": 7936 00:18:15.320 }, 00:18:15.320 { 00:18:15.320 "name": "BaseBdev2", 00:18:15.320 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:15.320 "is_configured": true, 00:18:15.320 "data_offset": 256, 00:18:15.320 "data_size": 7936 00:18:15.320 } 00:18:15.320 ] 00:18:15.320 }' 00:18:15.320 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.320 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.579 "name": "raid_bdev1", 00:18:15.579 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:15.579 "strip_size_kb": 0, 00:18:15.579 "state": "online", 00:18:15.579 "raid_level": "raid1", 00:18:15.579 "superblock": true, 00:18:15.579 "num_base_bdevs": 2, 00:18:15.579 "num_base_bdevs_discovered": 1, 00:18:15.579 "num_base_bdevs_operational": 1, 00:18:15.579 "base_bdevs_list": [ 00:18:15.579 { 00:18:15.579 "name": null, 00:18:15.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.579 "is_configured": false, 00:18:15.579 "data_offset": 0, 00:18:15.579 "data_size": 7936 00:18:15.579 }, 00:18:15.579 { 00:18:15.579 "name": "BaseBdev2", 00:18:15.579 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:15.579 "is_configured": 
true, 00:18:15.579 "data_offset": 256, 00:18:15.579 "data_size": 7936 00:18:15.579 } 00:18:15.579 ] 00:18:15.579 }' 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.579 [2024-12-09 14:50:53.687114] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.579 [2024-12-09 14:50:53.687290] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.579 [2024-12-09 14:50:53.687311] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:15.579 request: 00:18:15.579 { 00:18:15.579 "base_bdev": "BaseBdev1", 00:18:15.579 "raid_bdev": "raid_bdev1", 00:18:15.579 "method": "bdev_raid_add_base_bdev", 00:18:15.579 "req_id": 1 00:18:15.579 } 00:18:15.579 Got JSON-RPC error response 00:18:15.579 response: 00:18:15.579 { 00:18:15.579 "code": -22, 00:18:15.579 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:15.579 } 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.579 14:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.960 "name": "raid_bdev1", 00:18:16.960 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:16.960 "strip_size_kb": 0, 00:18:16.960 "state": "online", 00:18:16.960 "raid_level": "raid1", 00:18:16.960 "superblock": true, 00:18:16.960 "num_base_bdevs": 2, 00:18:16.960 "num_base_bdevs_discovered": 1, 00:18:16.960 "num_base_bdevs_operational": 1, 00:18:16.960 "base_bdevs_list": [ 00:18:16.960 { 00:18:16.960 "name": null, 00:18:16.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.960 "is_configured": false, 00:18:16.960 
"data_offset": 0, 00:18:16.960 "data_size": 7936 00:18:16.960 }, 00:18:16.960 { 00:18:16.960 "name": "BaseBdev2", 00:18:16.960 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:16.960 "is_configured": true, 00:18:16.960 "data_offset": 256, 00:18:16.960 "data_size": 7936 00:18:16.960 } 00:18:16.960 ] 00:18:16.960 }' 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.960 14:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.221 "name": "raid_bdev1", 00:18:17.221 "uuid": "a6828fe5-9d3f-443d-bc6d-9d200f99331c", 00:18:17.221 
"strip_size_kb": 0, 00:18:17.221 "state": "online", 00:18:17.221 "raid_level": "raid1", 00:18:17.221 "superblock": true, 00:18:17.221 "num_base_bdevs": 2, 00:18:17.221 "num_base_bdevs_discovered": 1, 00:18:17.221 "num_base_bdevs_operational": 1, 00:18:17.221 "base_bdevs_list": [ 00:18:17.221 { 00:18:17.221 "name": null, 00:18:17.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.221 "is_configured": false, 00:18:17.221 "data_offset": 0, 00:18:17.221 "data_size": 7936 00:18:17.221 }, 00:18:17.221 { 00:18:17.221 "name": "BaseBdev2", 00:18:17.221 "uuid": "8b5daccd-91ce-55e7-a13e-e32d3a550344", 00:18:17.221 "is_configured": true, 00:18:17.221 "data_offset": 256, 00:18:17.221 "data_size": 7936 00:18:17.221 } 00:18:17.221 ] 00:18:17.221 }' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 89088 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89088 ']' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 89088 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89088 00:18:17.221 killing process with 
pid 89088 00:18:17.221 Received shutdown signal, test time was about 60.000000 seconds 00:18:17.221 00:18:17.221 Latency(us) 00:18:17.221 [2024-12-09T14:50:55.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.221 [2024-12-09T14:50:55.343Z] =================================================================================================================== 00:18:17.221 [2024-12-09T14:50:55.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89088' 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 89088 00:18:17.221 [2024-12-09 14:50:55.329451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.221 [2024-12-09 14:50:55.329577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.221 14:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 89088 00:18:17.221 [2024-12-09 14:50:55.329656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.221 [2024-12-09 14:50:55.329668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:17.789 [2024-12-09 14:50:55.648738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.727 14:50:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:18.727 00:18:18.727 real 0m19.992s 00:18:18.727 user 0m26.312s 00:18:18.727 sys 0m2.587s 00:18:18.727 14:50:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.727 14:50:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.727 ************************************ 00:18:18.727 END TEST raid_rebuild_test_sb_md_separate 00:18:18.727 ************************************ 00:18:18.727 14:50:56 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:18.727 14:50:56 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:18.727 14:50:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:18.727 14:50:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.727 14:50:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.727 ************************************ 00:18:18.727 START TEST raid_state_function_test_sb_md_interleaved 00:18:18.727 ************************************ 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:18.727 14:50:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89780 00:18:18.727 Process raid pid: 89780 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89780' 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89780 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89780 ']' 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.727 14:50:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.987 [2024-12-09 14:50:56.914011] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:18:18.987 [2024-12-09 14:50:56.914144] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.987 [2024-12-09 14:50:57.073945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.245 [2024-12-09 14:50:57.186236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.503 [2024-12-09 14:50:57.384494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.503 [2024-12-09 14:50:57.384535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.762 [2024-12-09 14:50:57.749650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.762 [2024-12-09 14:50:57.749703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.762 [2024-12-09 14:50:57.749714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.762 [2024-12-09 14:50:57.749723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.762 14:50:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.762 14:50:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.762 "name": "Existed_Raid", 00:18:19.762 "uuid": "8e9e3b2f-c512-4b23-9ace-aeef51d0d3be", 00:18:19.762 "strip_size_kb": 0, 00:18:19.762 "state": "configuring", 00:18:19.762 "raid_level": "raid1", 00:18:19.762 "superblock": true, 00:18:19.762 "num_base_bdevs": 2, 00:18:19.762 "num_base_bdevs_discovered": 0, 00:18:19.762 "num_base_bdevs_operational": 2, 00:18:19.762 "base_bdevs_list": [ 00:18:19.762 { 00:18:19.762 "name": "BaseBdev1", 00:18:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.762 "is_configured": false, 00:18:19.762 "data_offset": 0, 00:18:19.762 "data_size": 0 00:18:19.762 }, 00:18:19.762 { 00:18:19.762 "name": "BaseBdev2", 00:18:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.762 "is_configured": false, 00:18:19.762 "data_offset": 0, 00:18:19.762 "data_size": 0 00:18:19.762 } 00:18:19.762 ] 00:18:19.762 }' 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.762 14:50:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.331 [2024-12-09 14:50:58.200789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.331 [2024-12-09 14:50:58.200830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.331 [2024-12-09 14:50:58.212757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.331 [2024-12-09 14:50:58.212796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.331 [2024-12-09 14:50:58.212805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.331 [2024-12-09 14:50:58.212815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.331 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.332 [2024-12-09 14:50:58.261350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.332 BaseBdev1 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.332 [ 00:18:20.332 { 00:18:20.332 "name": "BaseBdev1", 00:18:20.332 "aliases": [ 00:18:20.332 "9a1a1125-1e2d-4680-93b6-5341ce9500db" 00:18:20.332 ], 00:18:20.332 "product_name": "Malloc disk", 00:18:20.332 "block_size": 4128, 00:18:20.332 "num_blocks": 8192, 00:18:20.332 "uuid": "9a1a1125-1e2d-4680-93b6-5341ce9500db", 00:18:20.332 "md_size": 32, 00:18:20.332 
"md_interleave": true, 00:18:20.332 "dif_type": 0, 00:18:20.332 "assigned_rate_limits": { 00:18:20.332 "rw_ios_per_sec": 0, 00:18:20.332 "rw_mbytes_per_sec": 0, 00:18:20.332 "r_mbytes_per_sec": 0, 00:18:20.332 "w_mbytes_per_sec": 0 00:18:20.332 }, 00:18:20.332 "claimed": true, 00:18:20.332 "claim_type": "exclusive_write", 00:18:20.332 "zoned": false, 00:18:20.332 "supported_io_types": { 00:18:20.332 "read": true, 00:18:20.332 "write": true, 00:18:20.332 "unmap": true, 00:18:20.332 "flush": true, 00:18:20.332 "reset": true, 00:18:20.332 "nvme_admin": false, 00:18:20.332 "nvme_io": false, 00:18:20.332 "nvme_io_md": false, 00:18:20.332 "write_zeroes": true, 00:18:20.332 "zcopy": true, 00:18:20.332 "get_zone_info": false, 00:18:20.332 "zone_management": false, 00:18:20.332 "zone_append": false, 00:18:20.332 "compare": false, 00:18:20.332 "compare_and_write": false, 00:18:20.332 "abort": true, 00:18:20.332 "seek_hole": false, 00:18:20.332 "seek_data": false, 00:18:20.332 "copy": true, 00:18:20.332 "nvme_iov_md": false 00:18:20.332 }, 00:18:20.332 "memory_domains": [ 00:18:20.332 { 00:18:20.332 "dma_device_id": "system", 00:18:20.332 "dma_device_type": 1 00:18:20.332 }, 00:18:20.332 { 00:18:20.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.332 "dma_device_type": 2 00:18:20.332 } 00:18:20.332 ], 00:18:20.332 "driver_specific": {} 00:18:20.332 } 00:18:20.332 ] 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.332 14:50:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.332 "name": "Existed_Raid", 00:18:20.332 "uuid": "b04fff99-cea3-44ea-908f-8daa80175ae8", 00:18:20.332 "strip_size_kb": 0, 00:18:20.332 "state": "configuring", 00:18:20.332 "raid_level": "raid1", 
00:18:20.332 "superblock": true, 00:18:20.332 "num_base_bdevs": 2, 00:18:20.332 "num_base_bdevs_discovered": 1, 00:18:20.332 "num_base_bdevs_operational": 2, 00:18:20.332 "base_bdevs_list": [ 00:18:20.332 { 00:18:20.332 "name": "BaseBdev1", 00:18:20.332 "uuid": "9a1a1125-1e2d-4680-93b6-5341ce9500db", 00:18:20.332 "is_configured": true, 00:18:20.332 "data_offset": 256, 00:18:20.332 "data_size": 7936 00:18:20.332 }, 00:18:20.332 { 00:18:20.332 "name": "BaseBdev2", 00:18:20.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.332 "is_configured": false, 00:18:20.332 "data_offset": 0, 00:18:20.332 "data_size": 0 00:18:20.332 } 00:18:20.332 ] 00:18:20.332 }' 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.332 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.902 [2024-12-09 14:50:58.780542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.902 [2024-12-09 14:50:58.780617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.902 [2024-12-09 14:50:58.792565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.902 [2024-12-09 14:50:58.794499] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.902 [2024-12-09 14:50:58.794544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.902 
14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.902 "name": "Existed_Raid", 00:18:20.902 "uuid": "630b0855-41bc-42fd-887e-3a72163ca9e9", 00:18:20.902 "strip_size_kb": 0, 00:18:20.902 "state": "configuring", 00:18:20.902 "raid_level": "raid1", 00:18:20.902 "superblock": true, 00:18:20.902 "num_base_bdevs": 2, 00:18:20.902 "num_base_bdevs_discovered": 1, 00:18:20.902 "num_base_bdevs_operational": 2, 00:18:20.902 "base_bdevs_list": [ 00:18:20.902 { 00:18:20.902 "name": "BaseBdev1", 00:18:20.902 "uuid": "9a1a1125-1e2d-4680-93b6-5341ce9500db", 00:18:20.902 "is_configured": true, 00:18:20.902 "data_offset": 256, 00:18:20.902 "data_size": 7936 00:18:20.902 }, 00:18:20.902 { 00:18:20.902 "name": "BaseBdev2", 00:18:20.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.902 "is_configured": false, 00:18:20.902 "data_offset": 0, 00:18:20.902 "data_size": 0 00:18:20.902 } 00:18:20.902 ] 00:18:20.902 }' 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:20.902 14:50:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.161 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:21.161 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.161 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.458 [2024-12-09 14:50:59.307793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.458 [2024-12-09 14:50:59.308008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:21.458 [2024-12-09 14:50:59.308023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:21.458 [2024-12-09 14:50:59.308104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:21.458 [2024-12-09 14:50:59.308182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:21.458 [2024-12-09 14:50:59.308192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:21.458 [2024-12-09 14:50:59.308272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.458 BaseBdev2 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.458 [ 00:18:21.458 { 00:18:21.458 "name": "BaseBdev2", 00:18:21.458 "aliases": [ 00:18:21.458 "042fc348-d7f4-4823-a763-27579e2c81c1" 00:18:21.458 ], 00:18:21.458 "product_name": "Malloc disk", 00:18:21.458 "block_size": 4128, 00:18:21.458 "num_blocks": 8192, 00:18:21.458 "uuid": "042fc348-d7f4-4823-a763-27579e2c81c1", 00:18:21.458 "md_size": 32, 00:18:21.458 "md_interleave": true, 00:18:21.458 "dif_type": 0, 00:18:21.458 "assigned_rate_limits": { 00:18:21.458 "rw_ios_per_sec": 0, 00:18:21.458 "rw_mbytes_per_sec": 0, 00:18:21.458 "r_mbytes_per_sec": 0, 00:18:21.458 "w_mbytes_per_sec": 0 00:18:21.458 }, 00:18:21.458 "claimed": true, 00:18:21.458 "claim_type": "exclusive_write", 
00:18:21.458 "zoned": false, 00:18:21.458 "supported_io_types": { 00:18:21.458 "read": true, 00:18:21.458 "write": true, 00:18:21.458 "unmap": true, 00:18:21.458 "flush": true, 00:18:21.458 "reset": true, 00:18:21.458 "nvme_admin": false, 00:18:21.458 "nvme_io": false, 00:18:21.458 "nvme_io_md": false, 00:18:21.458 "write_zeroes": true, 00:18:21.458 "zcopy": true, 00:18:21.458 "get_zone_info": false, 00:18:21.458 "zone_management": false, 00:18:21.458 "zone_append": false, 00:18:21.458 "compare": false, 00:18:21.458 "compare_and_write": false, 00:18:21.458 "abort": true, 00:18:21.458 "seek_hole": false, 00:18:21.458 "seek_data": false, 00:18:21.458 "copy": true, 00:18:21.458 "nvme_iov_md": false 00:18:21.458 }, 00:18:21.458 "memory_domains": [ 00:18:21.458 { 00:18:21.458 "dma_device_id": "system", 00:18:21.458 "dma_device_type": 1 00:18:21.458 }, 00:18:21.458 { 00:18:21.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.458 "dma_device_type": 2 00:18:21.458 } 00:18:21.458 ], 00:18:21.458 "driver_specific": {} 00:18:21.458 } 00:18:21.458 ] 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:21.458 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.459 
14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.459 "name": "Existed_Raid", 00:18:21.459 "uuid": "630b0855-41bc-42fd-887e-3a72163ca9e9", 00:18:21.459 "strip_size_kb": 0, 00:18:21.459 "state": "online", 00:18:21.459 "raid_level": "raid1", 00:18:21.459 "superblock": true, 00:18:21.459 "num_base_bdevs": 2, 00:18:21.459 "num_base_bdevs_discovered": 2, 00:18:21.459 
"num_base_bdevs_operational": 2, 00:18:21.459 "base_bdevs_list": [ 00:18:21.459 { 00:18:21.459 "name": "BaseBdev1", 00:18:21.459 "uuid": "9a1a1125-1e2d-4680-93b6-5341ce9500db", 00:18:21.459 "is_configured": true, 00:18:21.459 "data_offset": 256, 00:18:21.459 "data_size": 7936 00:18:21.459 }, 00:18:21.459 { 00:18:21.459 "name": "BaseBdev2", 00:18:21.459 "uuid": "042fc348-d7f4-4823-a763-27579e2c81c1", 00:18:21.459 "is_configured": true, 00:18:21.459 "data_offset": 256, 00:18:21.459 "data_size": 7936 00:18:21.459 } 00:18:21.459 ] 00:18:21.459 }' 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.459 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.731 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:21.731 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.732 14:50:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.732 [2024-12-09 14:50:59.819463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.732 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.991 "name": "Existed_Raid", 00:18:21.991 "aliases": [ 00:18:21.991 "630b0855-41bc-42fd-887e-3a72163ca9e9" 00:18:21.991 ], 00:18:21.991 "product_name": "Raid Volume", 00:18:21.991 "block_size": 4128, 00:18:21.991 "num_blocks": 7936, 00:18:21.991 "uuid": "630b0855-41bc-42fd-887e-3a72163ca9e9", 00:18:21.991 "md_size": 32, 00:18:21.991 "md_interleave": true, 00:18:21.991 "dif_type": 0, 00:18:21.991 "assigned_rate_limits": { 00:18:21.991 "rw_ios_per_sec": 0, 00:18:21.991 "rw_mbytes_per_sec": 0, 00:18:21.991 "r_mbytes_per_sec": 0, 00:18:21.991 "w_mbytes_per_sec": 0 00:18:21.991 }, 00:18:21.991 "claimed": false, 00:18:21.991 "zoned": false, 00:18:21.991 "supported_io_types": { 00:18:21.991 "read": true, 00:18:21.991 "write": true, 00:18:21.991 "unmap": false, 00:18:21.991 "flush": false, 00:18:21.991 "reset": true, 00:18:21.991 "nvme_admin": false, 00:18:21.991 "nvme_io": false, 00:18:21.991 "nvme_io_md": false, 00:18:21.991 "write_zeroes": true, 00:18:21.991 "zcopy": false, 00:18:21.991 "get_zone_info": false, 00:18:21.991 "zone_management": false, 00:18:21.991 "zone_append": false, 00:18:21.991 "compare": false, 00:18:21.991 "compare_and_write": false, 00:18:21.991 "abort": false, 00:18:21.991 "seek_hole": false, 00:18:21.991 "seek_data": false, 00:18:21.991 "copy": false, 00:18:21.991 "nvme_iov_md": false 00:18:21.991 }, 00:18:21.991 "memory_domains": [ 00:18:21.991 { 00:18:21.991 "dma_device_id": "system", 00:18:21.991 "dma_device_type": 1 00:18:21.991 }, 00:18:21.991 { 00:18:21.991 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:21.991 "dma_device_type": 2 00:18:21.991 }, 00:18:21.991 { 00:18:21.991 "dma_device_id": "system", 00:18:21.991 "dma_device_type": 1 00:18:21.991 }, 00:18:21.991 { 00:18:21.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.991 "dma_device_type": 2 00:18:21.991 } 00:18:21.991 ], 00:18:21.991 "driver_specific": { 00:18:21.991 "raid": { 00:18:21.991 "uuid": "630b0855-41bc-42fd-887e-3a72163ca9e9", 00:18:21.991 "strip_size_kb": 0, 00:18:21.991 "state": "online", 00:18:21.991 "raid_level": "raid1", 00:18:21.991 "superblock": true, 00:18:21.991 "num_base_bdevs": 2, 00:18:21.991 "num_base_bdevs_discovered": 2, 00:18:21.991 "num_base_bdevs_operational": 2, 00:18:21.991 "base_bdevs_list": [ 00:18:21.991 { 00:18:21.991 "name": "BaseBdev1", 00:18:21.991 "uuid": "9a1a1125-1e2d-4680-93b6-5341ce9500db", 00:18:21.991 "is_configured": true, 00:18:21.991 "data_offset": 256, 00:18:21.991 "data_size": 7936 00:18:21.991 }, 00:18:21.991 { 00:18:21.991 "name": "BaseBdev2", 00:18:21.991 "uuid": "042fc348-d7f4-4823-a763-27579e2c81c1", 00:18:21.991 "is_configured": true, 00:18:21.991 "data_offset": 256, 00:18:21.991 "data_size": 7936 00:18:21.991 } 00:18:21.991 ] 00:18:21.991 } 00:18:21.991 } 00:18:21.991 }' 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:21.991 BaseBdev2' 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:21.991 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.992 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.992 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:21.992 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.992 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.992 14:50:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.992 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.992 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:21.992 
14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.992 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:21.992 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.992 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.992 [2024-12-09 14:51:00.034793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.251 14:51:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.251 "name": "Existed_Raid", 00:18:22.251 "uuid": "630b0855-41bc-42fd-887e-3a72163ca9e9", 00:18:22.251 "strip_size_kb": 0, 00:18:22.251 "state": "online", 00:18:22.251 "raid_level": "raid1", 00:18:22.251 "superblock": true, 00:18:22.251 "num_base_bdevs": 2, 00:18:22.251 "num_base_bdevs_discovered": 1, 00:18:22.251 "num_base_bdevs_operational": 1, 00:18:22.251 "base_bdevs_list": [ 00:18:22.251 { 00:18:22.251 "name": null, 00:18:22.251 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:22.251 "is_configured": false, 00:18:22.251 "data_offset": 0, 00:18:22.251 "data_size": 7936 00:18:22.251 }, 00:18:22.251 { 00:18:22.251 "name": "BaseBdev2", 00:18:22.251 "uuid": "042fc348-d7f4-4823-a763-27579e2c81c1", 00:18:22.251 "is_configured": true, 00:18:22.251 "data_offset": 256, 00:18:22.251 "data_size": 7936 00:18:22.251 } 00:18:22.251 ] 00:18:22.251 }' 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.251 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:22.511 14:51:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.511 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.511 [2024-12-09 14:51:00.617763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:22.511 [2024-12-09 14:51:00.617896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.770 [2024-12-09 14:51:00.711749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.770 [2024-12-09 14:51:00.711797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.770 [2024-12-09 14:51:00.711810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89780 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89780 ']' 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89780 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89780 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.770 killing process with pid 89780 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89780' 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89780 00:18:22.770 [2024-12-09 14:51:00.807817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.770 14:51:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89780 00:18:22.770 [2024-12-09 14:51:00.824697] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.149 
14:51:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:24.149 00:18:24.149 real 0m5.124s 00:18:24.149 user 0m7.452s 00:18:24.149 sys 0m0.875s 00:18:24.149 14:51:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.149 14:51:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.149 ************************************ 00:18:24.149 END TEST raid_state_function_test_sb_md_interleaved 00:18:24.149 ************************************ 00:18:24.149 14:51:01 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:24.149 14:51:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:24.149 14:51:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.149 14:51:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.149 ************************************ 00:18:24.149 START TEST raid_superblock_test_md_interleaved 00:18:24.149 ************************************ 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:24.149 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=90028 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 90028 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90028 ']' 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.150 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.150 [2024-12-09 14:51:02.105436] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:18:24.150 [2024-12-09 14:51:02.105650] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90028 ] 00:18:24.409 [2024-12-09 14:51:02.278494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.409 [2024-12-09 14:51:02.393518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.667 [2024-12-09 14:51:02.591592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.667 [2024-12-09 14:51:02.591660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.926 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.926 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:24.926 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:24.926 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:24.926 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.927 malloc1 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.927 [2024-12-09 14:51:02.987235] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.927 [2024-12-09 14:51:02.987383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.927 [2024-12-09 14:51:02.987429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:18:24.927 [2024-12-09 14:51:02.987467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.927 [2024-12-09 14:51:02.989363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.927 [2024-12-09 14:51:02.989433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.927 pt1 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.927 14:51:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.927 malloc2 00:18:24.927 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.927 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:24.927 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.927 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.927 [2024-12-09 14:51:03.046711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:24.927 [2024-12-09 14:51:03.046817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.927 [2024-12-09 14:51:03.046865] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:24.927 [2024-12-09 14:51:03.046895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.186 [2024-12-09 14:51:03.048904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.186 [2024-12-09 14:51:03.048974] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.186 pt2 00:18:25.186 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.186 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:25.186 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:25.186 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:25.186 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.186 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.186 [2024-12-09 
14:51:03.058735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.187 [2024-12-09 14:51:03.060600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.187 [2024-12-09 14:51:03.060823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:25.187 [2024-12-09 14:51:03.060868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:25.187 [2024-12-09 14:51:03.060968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:25.187 [2024-12-09 14:51:03.061070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:25.187 [2024-12-09 14:51:03.061110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:25.187 [2024-12-09 14:51:03.061216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.187 "name": "raid_bdev1", 00:18:25.187 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:25.187 "strip_size_kb": 0, 00:18:25.187 "state": "online", 00:18:25.187 "raid_level": "raid1", 00:18:25.187 "superblock": true, 00:18:25.187 "num_base_bdevs": 2, 00:18:25.187 "num_base_bdevs_discovered": 2, 00:18:25.187 "num_base_bdevs_operational": 2, 00:18:25.187 "base_bdevs_list": [ 00:18:25.187 { 00:18:25.187 "name": "pt1", 00:18:25.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.187 "is_configured": true, 00:18:25.187 "data_offset": 256, 00:18:25.187 "data_size": 7936 00:18:25.187 }, 00:18:25.187 { 00:18:25.187 "name": "pt2", 00:18:25.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.187 "is_configured": true, 00:18:25.187 "data_offset": 256, 00:18:25.187 "data_size": 7936 00:18:25.187 } 00:18:25.187 ] 00:18:25.187 }' 00:18:25.187 14:51:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.187 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.447 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.447 [2024-12-09 14:51:03.550159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.706 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.706 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:25.706 "name": "raid_bdev1", 00:18:25.706 "aliases": [ 00:18:25.706 "2e527c1d-df68-4ace-af7a-3d4bb446a5fa" 00:18:25.706 ], 00:18:25.706 "product_name": "Raid Volume", 00:18:25.706 "block_size": 4128, 00:18:25.706 
"num_blocks": 7936, 00:18:25.706 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:25.706 "md_size": 32, 00:18:25.706 "md_interleave": true, 00:18:25.706 "dif_type": 0, 00:18:25.706 "assigned_rate_limits": { 00:18:25.706 "rw_ios_per_sec": 0, 00:18:25.706 "rw_mbytes_per_sec": 0, 00:18:25.706 "r_mbytes_per_sec": 0, 00:18:25.706 "w_mbytes_per_sec": 0 00:18:25.706 }, 00:18:25.706 "claimed": false, 00:18:25.706 "zoned": false, 00:18:25.706 "supported_io_types": { 00:18:25.706 "read": true, 00:18:25.706 "write": true, 00:18:25.706 "unmap": false, 00:18:25.706 "flush": false, 00:18:25.706 "reset": true, 00:18:25.706 "nvme_admin": false, 00:18:25.706 "nvme_io": false, 00:18:25.706 "nvme_io_md": false, 00:18:25.706 "write_zeroes": true, 00:18:25.706 "zcopy": false, 00:18:25.706 "get_zone_info": false, 00:18:25.706 "zone_management": false, 00:18:25.706 "zone_append": false, 00:18:25.706 "compare": false, 00:18:25.706 "compare_and_write": false, 00:18:25.706 "abort": false, 00:18:25.706 "seek_hole": false, 00:18:25.706 "seek_data": false, 00:18:25.706 "copy": false, 00:18:25.706 "nvme_iov_md": false 00:18:25.706 }, 00:18:25.706 "memory_domains": [ 00:18:25.706 { 00:18:25.706 "dma_device_id": "system", 00:18:25.706 "dma_device_type": 1 00:18:25.706 }, 00:18:25.706 { 00:18:25.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.706 "dma_device_type": 2 00:18:25.706 }, 00:18:25.706 { 00:18:25.706 "dma_device_id": "system", 00:18:25.706 "dma_device_type": 1 00:18:25.706 }, 00:18:25.706 { 00:18:25.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.706 "dma_device_type": 2 00:18:25.706 } 00:18:25.706 ], 00:18:25.706 "driver_specific": { 00:18:25.706 "raid": { 00:18:25.706 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:25.707 "strip_size_kb": 0, 00:18:25.707 "state": "online", 00:18:25.707 "raid_level": "raid1", 00:18:25.707 "superblock": true, 00:18:25.707 "num_base_bdevs": 2, 00:18:25.707 "num_base_bdevs_discovered": 2, 00:18:25.707 "num_base_bdevs_operational": 
2, 00:18:25.707 "base_bdevs_list": [ 00:18:25.707 { 00:18:25.707 "name": "pt1", 00:18:25.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.707 "is_configured": true, 00:18:25.707 "data_offset": 256, 00:18:25.707 "data_size": 7936 00:18:25.707 }, 00:18:25.707 { 00:18:25.707 "name": "pt2", 00:18:25.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.707 "is_configured": true, 00:18:25.707 "data_offset": 256, 00:18:25.707 "data_size": 7936 00:18:25.707 } 00:18:25.707 ] 00:18:25.707 } 00:18:25.707 } 00:18:25.707 }' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:25.707 pt2' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.707 14:51:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:25.707 [2024-12-09 14:51:03.773779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2e527c1d-df68-4ace-af7a-3d4bb446a5fa 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 2e527c1d-df68-4ace-af7a-3d4bb446a5fa ']' 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.707 [2024-12-09 14:51:03.817380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.707 [2024-12-09 14:51:03.817408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.707 [2024-12-09 14:51:03.817496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.707 [2024-12-09 14:51:03.817554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.707 [2024-12-09 14:51:03.817566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:25.707 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.966 14:51:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.966 14:51:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.966 [2024-12-09 14:51:03.961171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:25.966 [2024-12-09 14:51:03.963127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:18:25.966 [2024-12-09 14:51:03.963204] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:25.966 [2024-12-09 14:51:03.963281] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:25.966 [2024-12-09 14:51:03.963297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.966 [2024-12-09 14:51:03.963307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:25.966 request: 00:18:25.966 { 00:18:25.966 "name": "raid_bdev1", 00:18:25.966 "raid_level": "raid1", 00:18:25.966 "base_bdevs": [ 00:18:25.966 "malloc1", 00:18:25.966 "malloc2" 00:18:25.966 ], 00:18:25.966 "superblock": false, 00:18:25.966 "method": "bdev_raid_create", 00:18:25.966 "req_id": 1 00:18:25.966 } 00:18:25.966 Got JSON-RPC error response 00:18:25.966 response: 00:18:25.966 { 00:18:25.966 "code": -17, 00:18:25.966 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:25.966 } 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:25.966 14:51:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.966 14:51:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.966 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:25.966 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.967 [2024-12-09 14:51:04.029013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:25.967 [2024-12-09 14:51:04.029113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.967 [2024-12-09 14:51:04.029165] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:25.967 [2024-12-09 14:51:04.029195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.967 [2024-12-09 14:51:04.031114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.967 [2024-12-09 14:51:04.031188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:25.967 [2024-12-09 14:51:04.031279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:25.967 [2024-12-09 14:51:04.031381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.967 pt1 00:18:25.967 14:51:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.967 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.226 
14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.226 "name": "raid_bdev1", 00:18:26.226 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:26.226 "strip_size_kb": 0, 00:18:26.226 "state": "configuring", 00:18:26.226 "raid_level": "raid1", 00:18:26.226 "superblock": true, 00:18:26.226 "num_base_bdevs": 2, 00:18:26.226 "num_base_bdevs_discovered": 1, 00:18:26.226 "num_base_bdevs_operational": 2, 00:18:26.226 "base_bdevs_list": [ 00:18:26.226 { 00:18:26.226 "name": "pt1", 00:18:26.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.226 "is_configured": true, 00:18:26.226 "data_offset": 256, 00:18:26.226 "data_size": 7936 00:18:26.226 }, 00:18:26.226 { 00:18:26.226 "name": null, 00:18:26.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.226 "is_configured": false, 00:18:26.226 "data_offset": 256, 00:18:26.226 "data_size": 7936 00:18:26.226 } 00:18:26.226 ] 00:18:26.226 }' 00:18:26.226 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.226 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.486 [2024-12-09 14:51:04.492239] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.486 [2024-12-09 14:51:04.492323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.486 [2024-12-09 14:51:04.492344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:26.486 [2024-12-09 14:51:04.492355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.486 [2024-12-09 14:51:04.492534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.486 [2024-12-09 14:51:04.492551] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.486 [2024-12-09 14:51:04.492624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:26.486 [2024-12-09 14:51:04.492651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.486 [2024-12-09 14:51:04.492738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:26.486 [2024-12-09 14:51:04.492749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:26.486 [2024-12-09 14:51:04.492826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:26.486 [2024-12-09 14:51:04.492900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:26.486 [2024-12-09 14:51:04.492908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:26.486 [2024-12-09 14:51:04.492972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.486 pt2 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:26.486 14:51:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.486 14:51:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.486 "name": "raid_bdev1", 00:18:26.486 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:26.486 "strip_size_kb": 0, 00:18:26.486 "state": "online", 00:18:26.486 "raid_level": "raid1", 00:18:26.486 "superblock": true, 00:18:26.486 "num_base_bdevs": 2, 00:18:26.486 "num_base_bdevs_discovered": 2, 00:18:26.486 "num_base_bdevs_operational": 2, 00:18:26.486 "base_bdevs_list": [ 00:18:26.486 { 00:18:26.486 "name": "pt1", 00:18:26.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.486 "is_configured": true, 00:18:26.486 "data_offset": 256, 00:18:26.486 "data_size": 7936 00:18:26.486 }, 00:18:26.486 { 00:18:26.486 "name": "pt2", 00:18:26.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.486 "is_configured": true, 00:18:26.486 "data_offset": 256, 00:18:26.486 "data_size": 7936 00:18:26.486 } 00:18:26.486 ] 00:18:26.486 }' 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.486 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.055 [2024-12-09 14:51:04.971711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.055 14:51:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.055 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.055 "name": "raid_bdev1", 00:18:27.055 "aliases": [ 00:18:27.055 "2e527c1d-df68-4ace-af7a-3d4bb446a5fa" 00:18:27.055 ], 00:18:27.055 "product_name": "Raid Volume", 00:18:27.055 "block_size": 4128, 00:18:27.055 "num_blocks": 7936, 00:18:27.055 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:27.055 "md_size": 32, 00:18:27.055 "md_interleave": true, 00:18:27.056 "dif_type": 0, 00:18:27.056 "assigned_rate_limits": { 00:18:27.056 "rw_ios_per_sec": 0, 00:18:27.056 "rw_mbytes_per_sec": 0, 00:18:27.056 "r_mbytes_per_sec": 0, 00:18:27.056 "w_mbytes_per_sec": 0 00:18:27.056 }, 00:18:27.056 "claimed": false, 00:18:27.056 "zoned": false, 00:18:27.056 "supported_io_types": { 00:18:27.056 "read": true, 00:18:27.056 "write": true, 00:18:27.056 "unmap": false, 00:18:27.056 "flush": false, 00:18:27.056 "reset": true, 00:18:27.056 "nvme_admin": false, 00:18:27.056 "nvme_io": false, 00:18:27.056 "nvme_io_md": false, 00:18:27.056 "write_zeroes": true, 00:18:27.056 "zcopy": false, 00:18:27.056 "get_zone_info": false, 00:18:27.056 "zone_management": false, 00:18:27.056 "zone_append": false, 00:18:27.056 "compare": false, 00:18:27.056 "compare_and_write": false, 00:18:27.056 "abort": false, 00:18:27.056 "seek_hole": false, 
00:18:27.056 "seek_data": false, 00:18:27.056 "copy": false, 00:18:27.056 "nvme_iov_md": false 00:18:27.056 }, 00:18:27.056 "memory_domains": [ 00:18:27.056 { 00:18:27.056 "dma_device_id": "system", 00:18:27.056 "dma_device_type": 1 00:18:27.056 }, 00:18:27.056 { 00:18:27.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.056 "dma_device_type": 2 00:18:27.056 }, 00:18:27.056 { 00:18:27.056 "dma_device_id": "system", 00:18:27.056 "dma_device_type": 1 00:18:27.056 }, 00:18:27.056 { 00:18:27.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.056 "dma_device_type": 2 00:18:27.056 } 00:18:27.056 ], 00:18:27.056 "driver_specific": { 00:18:27.056 "raid": { 00:18:27.056 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:27.056 "strip_size_kb": 0, 00:18:27.056 "state": "online", 00:18:27.056 "raid_level": "raid1", 00:18:27.056 "superblock": true, 00:18:27.056 "num_base_bdevs": 2, 00:18:27.056 "num_base_bdevs_discovered": 2, 00:18:27.056 "num_base_bdevs_operational": 2, 00:18:27.056 "base_bdevs_list": [ 00:18:27.056 { 00:18:27.056 "name": "pt1", 00:18:27.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.056 "is_configured": true, 00:18:27.056 "data_offset": 256, 00:18:27.056 "data_size": 7936 00:18:27.056 }, 00:18:27.056 { 00:18:27.056 "name": "pt2", 00:18:27.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.056 "is_configured": true, 00:18:27.056 "data_offset": 256, 00:18:27.056 "data_size": 7936 00:18:27.056 } 00:18:27.056 ] 00:18:27.056 } 00:18:27.056 } 00:18:27.056 }' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:27.056 pt2' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.056 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.315 
14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:27.315 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:27.315 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.315 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:27.315 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.315 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.315 [2024-12-09 14:51:05.203400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.315 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.315 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 2e527c1d-df68-4ace-af7a-3d4bb446a5fa '!=' 2e527c1d-df68-4ace-af7a-3d4bb446a5fa ']' 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.316 [2024-12-09 14:51:05.247035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:27.316 14:51:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.316 14:51:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.316 "name": "raid_bdev1", 00:18:27.316 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:27.316 "strip_size_kb": 0, 00:18:27.316 "state": "online", 00:18:27.316 "raid_level": "raid1", 00:18:27.316 "superblock": true, 00:18:27.316 "num_base_bdevs": 2, 00:18:27.316 "num_base_bdevs_discovered": 1, 00:18:27.316 "num_base_bdevs_operational": 1, 00:18:27.316 "base_bdevs_list": [ 00:18:27.316 { 00:18:27.316 "name": null, 00:18:27.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.316 "is_configured": false, 00:18:27.316 "data_offset": 0, 00:18:27.316 "data_size": 7936 00:18:27.316 }, 00:18:27.316 { 00:18:27.316 "name": "pt2", 00:18:27.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.316 "is_configured": true, 00:18:27.316 "data_offset": 256, 00:18:27.316 "data_size": 7936 00:18:27.316 } 00:18:27.316 ] 00:18:27.316 }' 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.316 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.575 [2024-12-09 14:51:05.686247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.575 [2024-12-09 14:51:05.686325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.575 [2024-12-09 14:51:05.686425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.575 [2024-12-09 14:51:05.686501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.575 [2024-12-09 14:51:05.686549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.575 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:27.835 14:51:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.835 [2024-12-09 14:51:05.758103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:27.835 [2024-12-09 14:51:05.758155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.835 [2024-12-09 14:51:05.758171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:27.835 [2024-12-09 14:51:05.758181] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.835 [2024-12-09 14:51:05.760076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.835 [2024-12-09 14:51:05.760155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:27.835 [2024-12-09 14:51:05.760216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:27.835 [2024-12-09 14:51:05.760276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.835 [2024-12-09 14:51:05.760347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:27.835 [2024-12-09 14:51:05.760359] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.835 [2024-12-09 14:51:05.760446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:27.835 [2024-12-09 14:51:05.760515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:27.835 [2024-12-09 14:51:05.760523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:27.835 [2024-12-09 14:51:05.760604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.835 pt2 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.835 "name": "raid_bdev1", 00:18:27.835 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:27.835 "strip_size_kb": 0, 00:18:27.835 "state": "online", 00:18:27.835 "raid_level": "raid1", 00:18:27.835 "superblock": true, 00:18:27.835 "num_base_bdevs": 2, 00:18:27.835 "num_base_bdevs_discovered": 1, 00:18:27.835 "num_base_bdevs_operational": 1, 00:18:27.835 "base_bdevs_list": [ 00:18:27.835 { 00:18:27.835 "name": null, 00:18:27.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.835 "is_configured": false, 00:18:27.835 "data_offset": 256, 00:18:27.835 "data_size": 7936 00:18:27.835 }, 00:18:27.835 { 00:18:27.835 "name": "pt2", 00:18:27.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.835 "is_configured": true, 00:18:27.835 "data_offset": 256, 00:18:27.835 "data_size": 7936 00:18:27.835 } 00:18:27.835 ] 00:18:27.835 }' 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.835 14:51:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.094 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.094 14:51:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.095 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.354 [2024-12-09 14:51:06.217339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.354 [2024-12-09 14:51:06.217417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.354 [2024-12-09 14:51:06.217534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.355 [2024-12-09 14:51:06.217628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.355 [2024-12-09 14:51:06.217680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.355 [2024-12-09 14:51:06.277241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.355 [2024-12-09 14:51:06.277335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.355 [2024-12-09 14:51:06.277371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:28.355 [2024-12-09 14:51:06.277398] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.355 [2024-12-09 14:51:06.279362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.355 [2024-12-09 14:51:06.279430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.355 [2024-12-09 14:51:06.279489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:28.355 [2024-12-09 14:51:06.279545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.355 [2024-12-09 14:51:06.279661] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:28.355 [2024-12-09 14:51:06.279671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.355 [2024-12-09 14:51:06.279688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:28.355 [2024-12-09 14:51:06.279750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.355 [2024-12-09 14:51:06.279824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:28.355 [2024-12-09 14:51:06.279833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:28.355 [2024-12-09 14:51:06.279904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.355 [2024-12-09 14:51:06.279966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:28.355 [2024-12-09 14:51:06.279975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:28.355 [2024-12-09 14:51:06.280044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.355 pt1 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.355 14:51:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.355 "name": "raid_bdev1", 00:18:28.355 "uuid": "2e527c1d-df68-4ace-af7a-3d4bb446a5fa", 00:18:28.355 "strip_size_kb": 0, 00:18:28.355 "state": "online", 00:18:28.355 "raid_level": "raid1", 00:18:28.355 "superblock": true, 00:18:28.355 "num_base_bdevs": 2, 00:18:28.355 "num_base_bdevs_discovered": 1, 00:18:28.355 "num_base_bdevs_operational": 1, 00:18:28.355 "base_bdevs_list": [ 00:18:28.355 { 00:18:28.355 "name": null, 00:18:28.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.355 "is_configured": false, 00:18:28.355 "data_offset": 256, 00:18:28.355 "data_size": 7936 00:18:28.355 }, 00:18:28.355 { 00:18:28.355 "name": "pt2", 00:18:28.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.355 "is_configured": true, 00:18:28.355 "data_offset": 256, 00:18:28.355 "data_size": 7936 00:18:28.355 } 00:18:28.355 ] 00:18:28.355 }' 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.355 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.614 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:28.614 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:28.614 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.614 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.614 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.615 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:28.615 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.615 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:28.615 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.615 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.615 [2024-12-09 14:51:06.720702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 2e527c1d-df68-4ace-af7a-3d4bb446a5fa '!=' 2e527c1d-df68-4ace-af7a-3d4bb446a5fa ']' 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 90028 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90028 ']' 00:18:28.874 14:51:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90028 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90028 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.874 killing process with pid 90028 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90028' 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 90028 00:18:28.874 [2024-12-09 14:51:06.796207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.874 [2024-12-09 14:51:06.796296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.874 [2024-12-09 14:51:06.796342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.874 [2024-12-09 14:51:06.796356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:28.874 14:51:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 90028 00:18:29.133 [2024-12-09 14:51:07.006527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.072 ************************************ 00:18:30.072 END TEST raid_superblock_test_md_interleaved 00:18:30.072 ************************************ 00:18:30.072 14:51:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:30.072 00:18:30.072 real 0m6.096s 00:18:30.072 user 0m9.291s 00:18:30.072 sys 0m1.067s 00:18:30.072 14:51:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.072 14:51:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.072 14:51:08 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:30.072 14:51:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:30.072 14:51:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.072 14:51:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.072 ************************************ 00:18:30.072 START TEST raid_rebuild_test_sb_md_interleaved 00:18:30.072 ************************************ 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:30.072 14:51:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:30.072 
14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=90356 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:30.072 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 90356 00:18:30.332 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90356 ']' 00:18:30.332 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.332 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.332 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.332 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.332 14:51:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.332 [2024-12-09 14:51:08.274515] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:18:30.332 [2024-12-09 14:51:08.274732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90356 ] 00:18:30.332 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:30.332 Zero copy mechanism will not be used. 
00:18:30.332 [2024-12-09 14:51:08.449813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.591 [2024-12-09 14:51:08.561450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.851 [2024-12-09 14:51:08.761878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.851 [2024-12-09 14:51:08.762019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.111 BaseBdev1_malloc 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.111 [2024-12-09 14:51:09.147775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.111 [2024-12-09 14:51:09.147838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.111 
[2024-12-09 14:51:09.147862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:31.111 [2024-12-09 14:51:09.147875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.111 [2024-12-09 14:51:09.149806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.111 [2024-12-09 14:51:09.149845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.111 BaseBdev1 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.111 BaseBdev2_malloc 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.111 [2024-12-09 14:51:09.202889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:31.111 [2024-12-09 14:51:09.202944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.111 [2024-12-09 14:51:09.202963] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:31.111 [2024-12-09 14:51:09.202975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.111 [2024-12-09 14:51:09.204819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.111 [2024-12-09 14:51:09.204915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:31.111 BaseBdev2 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.111 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.372 spare_malloc 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.372 spare_delay 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.372 14:51:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.372 [2024-12-09 14:51:09.284416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.372 [2024-12-09 14:51:09.284504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.372 [2024-12-09 14:51:09.284525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:31.372 [2024-12-09 14:51:09.284536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.372 [2024-12-09 14:51:09.286438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.372 [2024-12-09 14:51:09.286530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.372 spare 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.372 [2024-12-09 14:51:09.296432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.372 [2024-12-09 14:51:09.298245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.372 [2024-12-09 14:51:09.298427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.372 [2024-12-09 14:51:09.298442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:31.372 [2024-12-09 14:51:09.298510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:18:31.372 [2024-12-09 14:51:09.298592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.372 [2024-12-09 14:51:09.298600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:31.372 [2024-12-09 14:51:09.298667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.372 "name": "raid_bdev1", 00:18:31.372 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:31.372 "strip_size_kb": 0, 00:18:31.372 "state": "online", 00:18:31.372 "raid_level": "raid1", 00:18:31.372 "superblock": true, 00:18:31.372 "num_base_bdevs": 2, 00:18:31.372 "num_base_bdevs_discovered": 2, 00:18:31.372 "num_base_bdevs_operational": 2, 00:18:31.372 "base_bdevs_list": [ 00:18:31.372 { 00:18:31.372 "name": "BaseBdev1", 00:18:31.372 "uuid": "c270e2fa-183f-5b18-bd43-6f09f7887f8d", 00:18:31.372 "is_configured": true, 00:18:31.372 "data_offset": 256, 00:18:31.372 "data_size": 7936 00:18:31.372 }, 00:18:31.372 { 00:18:31.372 "name": "BaseBdev2", 00:18:31.372 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:31.372 "is_configured": true, 00:18:31.372 "data_offset": 256, 00:18:31.372 "data_size": 7936 00:18:31.372 } 00:18:31.372 ] 00:18:31.372 }' 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.372 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.642 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.642 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.642 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.642 
14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.642 [2024-12-09 14:51:09.720022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.642 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.918 [2024-12-09 14:51:09.819536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.918 "name": "raid_bdev1", 00:18:31.918 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:31.918 "strip_size_kb": 0, 00:18:31.918 "state": "online", 00:18:31.918 "raid_level": "raid1", 00:18:31.918 "superblock": true, 00:18:31.918 "num_base_bdevs": 2, 00:18:31.918 "num_base_bdevs_discovered": 1, 00:18:31.918 "num_base_bdevs_operational": 1, 00:18:31.918 "base_bdevs_list": [ 00:18:31.918 { 00:18:31.918 "name": null, 00:18:31.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.918 "is_configured": false, 00:18:31.918 "data_offset": 0, 00:18:31.918 "data_size": 7936 00:18:31.918 }, 00:18:31.918 { 00:18:31.918 "name": "BaseBdev2", 00:18:31.918 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:31.918 "is_configured": true, 00:18:31.918 "data_offset": 256, 00:18:31.918 "data_size": 7936 00:18:31.918 } 00:18:31.918 ] 00:18:31.918 }' 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.918 14:51:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.177 14:51:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.177 14:51:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.177 14:51:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.177 [2024-12-09 14:51:10.254847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.177 [2024-12-09 14:51:10.271344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:32.177 14:51:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.177 14:51:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:32.177 
[2024-12-09 14:51:10.273395] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.557 "name": "raid_bdev1", 00:18:33.557 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:33.557 "strip_size_kb": 0, 00:18:33.557 "state": "online", 00:18:33.557 "raid_level": "raid1", 00:18:33.557 "superblock": true, 00:18:33.557 "num_base_bdevs": 2, 00:18:33.557 "num_base_bdevs_discovered": 2, 00:18:33.557 "num_base_bdevs_operational": 2, 00:18:33.557 "process": { 00:18:33.557 "type": "rebuild", 00:18:33.557 "target": "spare", 00:18:33.557 "progress": { 00:18:33.557 
"blocks": 2560, 00:18:33.557 "percent": 32 00:18:33.557 } 00:18:33.557 }, 00:18:33.557 "base_bdevs_list": [ 00:18:33.557 { 00:18:33.557 "name": "spare", 00:18:33.557 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:33.557 "is_configured": true, 00:18:33.557 "data_offset": 256, 00:18:33.557 "data_size": 7936 00:18:33.557 }, 00:18:33.557 { 00:18:33.557 "name": "BaseBdev2", 00:18:33.557 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:33.557 "is_configured": true, 00:18:33.557 "data_offset": 256, 00:18:33.557 "data_size": 7936 00:18:33.557 } 00:18:33.557 ] 00:18:33.557 }' 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.557 [2024-12-09 14:51:11.428741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.557 [2024-12-09 14:51:11.478946] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.557 [2024-12-09 14:51:11.479024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.557 [2024-12-09 14:51:11.479039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.557 [2024-12-09 14:51:11.479048] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.557 "name": "raid_bdev1", 00:18:33.557 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:33.557 "strip_size_kb": 0, 00:18:33.557 "state": "online", 00:18:33.557 "raid_level": "raid1", 00:18:33.557 "superblock": true, 00:18:33.557 "num_base_bdevs": 2, 00:18:33.557 "num_base_bdevs_discovered": 1, 00:18:33.557 "num_base_bdevs_operational": 1, 00:18:33.557 "base_bdevs_list": [ 00:18:33.557 { 00:18:33.557 "name": null, 00:18:33.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.557 "is_configured": false, 00:18:33.557 "data_offset": 0, 00:18:33.557 "data_size": 7936 00:18:33.557 }, 00:18:33.557 { 00:18:33.557 "name": "BaseBdev2", 00:18:33.557 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:33.557 "is_configured": true, 00:18:33.557 "data_offset": 256, 00:18:33.557 "data_size": 7936 00:18:33.557 } 00:18:33.557 ] 00:18:33.557 }' 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.557 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.126 14:51:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.126 14:51:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.126 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.126 "name": "raid_bdev1", 00:18:34.126 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:34.127 "strip_size_kb": 0, 00:18:34.127 "state": "online", 00:18:34.127 "raid_level": "raid1", 00:18:34.127 "superblock": true, 00:18:34.127 "num_base_bdevs": 2, 00:18:34.127 "num_base_bdevs_discovered": 1, 00:18:34.127 "num_base_bdevs_operational": 1, 00:18:34.127 "base_bdevs_list": [ 00:18:34.127 { 00:18:34.127 "name": null, 00:18:34.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.127 "is_configured": false, 00:18:34.127 "data_offset": 0, 00:18:34.127 "data_size": 7936 00:18:34.127 }, 00:18:34.127 { 00:18:34.127 "name": "BaseBdev2", 00:18:34.127 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:34.127 "is_configured": true, 00:18:34.127 "data_offset": 256, 00:18:34.127 "data_size": 7936 00:18:34.127 } 00:18:34.127 ] 00:18:34.127 }' 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.127 14:51:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.127 [2024-12-09 14:51:12.095295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.127 [2024-12-09 14:51:12.111891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.127 14:51:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:34.127 [2024-12-09 14:51:12.113814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.065 
14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.065 "name": "raid_bdev1", 00:18:35.065 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:35.065 "strip_size_kb": 0, 00:18:35.065 "state": "online", 00:18:35.065 "raid_level": "raid1", 00:18:35.065 "superblock": true, 00:18:35.065 "num_base_bdevs": 2, 00:18:35.065 "num_base_bdevs_discovered": 2, 00:18:35.065 "num_base_bdevs_operational": 2, 00:18:35.065 "process": { 00:18:35.065 "type": "rebuild", 00:18:35.065 "target": "spare", 00:18:35.065 "progress": { 00:18:35.065 "blocks": 2560, 00:18:35.065 "percent": 32 00:18:35.065 } 00:18:35.065 }, 00:18:35.065 "base_bdevs_list": [ 00:18:35.065 { 00:18:35.065 "name": "spare", 00:18:35.065 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:35.065 "is_configured": true, 00:18:35.065 "data_offset": 256, 00:18:35.065 "data_size": 7936 00:18:35.065 }, 00:18:35.065 { 00:18:35.065 "name": "BaseBdev2", 00:18:35.065 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:35.065 "is_configured": true, 00:18:35.065 "data_offset": 256, 00:18:35.065 "data_size": 7936 00:18:35.065 } 00:18:35.065 ] 00:18:35.065 }' 00:18:35.065 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.324 14:51:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:35.324 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=745 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.324 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.324 14:51:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.325 "name": "raid_bdev1", 00:18:35.325 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:35.325 "strip_size_kb": 0, 00:18:35.325 "state": "online", 00:18:35.325 "raid_level": "raid1", 00:18:35.325 "superblock": true, 00:18:35.325 "num_base_bdevs": 2, 00:18:35.325 "num_base_bdevs_discovered": 2, 00:18:35.325 "num_base_bdevs_operational": 2, 00:18:35.325 "process": { 00:18:35.325 "type": "rebuild", 00:18:35.325 "target": "spare", 00:18:35.325 "progress": { 00:18:35.325 "blocks": 2816, 00:18:35.325 "percent": 35 00:18:35.325 } 00:18:35.325 }, 00:18:35.325 "base_bdevs_list": [ 00:18:35.325 { 00:18:35.325 "name": "spare", 00:18:35.325 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:35.325 "is_configured": true, 00:18:35.325 "data_offset": 256, 00:18:35.325 "data_size": 7936 00:18:35.325 }, 00:18:35.325 { 00:18:35.325 "name": "BaseBdev2", 00:18:35.325 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:35.325 "is_configured": true, 00:18:35.325 "data_offset": 256, 00:18:35.325 "data_size": 7936 00:18:35.325 } 00:18:35.325 ] 00:18:35.325 }' 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.325 14:51:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.705 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.705 "name": "raid_bdev1", 00:18:36.705 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:36.705 "strip_size_kb": 0, 00:18:36.705 "state": "online", 00:18:36.705 "raid_level": "raid1", 00:18:36.705 "superblock": true, 00:18:36.705 "num_base_bdevs": 2, 00:18:36.705 "num_base_bdevs_discovered": 2, 00:18:36.705 
"num_base_bdevs_operational": 2, 00:18:36.705 "process": { 00:18:36.705 "type": "rebuild", 00:18:36.705 "target": "spare", 00:18:36.705 "progress": { 00:18:36.705 "blocks": 5888, 00:18:36.705 "percent": 74 00:18:36.705 } 00:18:36.705 }, 00:18:36.705 "base_bdevs_list": [ 00:18:36.705 { 00:18:36.705 "name": "spare", 00:18:36.705 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:36.705 "is_configured": true, 00:18:36.705 "data_offset": 256, 00:18:36.705 "data_size": 7936 00:18:36.705 }, 00:18:36.705 { 00:18:36.705 "name": "BaseBdev2", 00:18:36.705 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:36.706 "is_configured": true, 00:18:36.706 "data_offset": 256, 00:18:36.706 "data_size": 7936 00:18:36.706 } 00:18:36.706 ] 00:18:36.706 }' 00:18:36.706 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.706 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.706 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.706 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.706 14:51:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.274 [2024-12-09 14:51:15.228103] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:37.274 [2024-12-09 14:51:15.228190] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:37.274 [2024-12-09 14:51:15.228326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.533 "name": "raid_bdev1", 00:18:37.533 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:37.533 "strip_size_kb": 0, 00:18:37.533 "state": "online", 00:18:37.533 "raid_level": "raid1", 00:18:37.533 "superblock": true, 00:18:37.533 "num_base_bdevs": 2, 00:18:37.533 "num_base_bdevs_discovered": 2, 00:18:37.533 "num_base_bdevs_operational": 2, 00:18:37.533 "base_bdevs_list": [ 00:18:37.533 { 00:18:37.533 "name": "spare", 00:18:37.533 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:37.533 "is_configured": true, 00:18:37.533 "data_offset": 256, 00:18:37.533 "data_size": 7936 00:18:37.533 }, 00:18:37.533 { 00:18:37.533 "name": "BaseBdev2", 00:18:37.533 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:37.533 
"is_configured": true, 00:18:37.533 "data_offset": 256, 00:18:37.533 "data_size": 7936 00:18:37.533 } 00:18:37.533 ] 00:18:37.533 }' 00:18:37.533 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.792 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.793 "name": "raid_bdev1", 00:18:37.793 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:37.793 "strip_size_kb": 0, 00:18:37.793 "state": "online", 00:18:37.793 "raid_level": "raid1", 00:18:37.793 "superblock": true, 00:18:37.793 "num_base_bdevs": 2, 00:18:37.793 "num_base_bdevs_discovered": 2, 00:18:37.793 "num_base_bdevs_operational": 2, 00:18:37.793 "base_bdevs_list": [ 00:18:37.793 { 00:18:37.793 "name": "spare", 00:18:37.793 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:37.793 "is_configured": true, 00:18:37.793 "data_offset": 256, 00:18:37.793 "data_size": 7936 00:18:37.793 }, 00:18:37.793 { 00:18:37.793 "name": "BaseBdev2", 00:18:37.793 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:37.793 "is_configured": true, 00:18:37.793 "data_offset": 256, 00:18:37.793 "data_size": 7936 00:18:37.793 } 00:18:37.793 ] 00:18:37.793 }' 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.793 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.052 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.052 "name": "raid_bdev1", 00:18:38.052 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:38.052 "strip_size_kb": 0, 00:18:38.052 "state": "online", 00:18:38.052 "raid_level": "raid1", 00:18:38.052 "superblock": true, 00:18:38.052 "num_base_bdevs": 2, 00:18:38.052 "num_base_bdevs_discovered": 2, 00:18:38.052 "num_base_bdevs_operational": 2, 00:18:38.052 "base_bdevs_list": [ 00:18:38.052 { 00:18:38.052 "name": "spare", 00:18:38.052 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:38.052 
"is_configured": true, 00:18:38.052 "data_offset": 256, 00:18:38.052 "data_size": 7936 00:18:38.052 }, 00:18:38.052 { 00:18:38.052 "name": "BaseBdev2", 00:18:38.052 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:38.052 "is_configured": true, 00:18:38.052 "data_offset": 256, 00:18:38.052 "data_size": 7936 00:18:38.052 } 00:18:38.052 ] 00:18:38.052 }' 00:18:38.052 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.052 14:51:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.312 [2024-12-09 14:51:16.295202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.312 [2024-12-09 14:51:16.295331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.312 [2024-12-09 14:51:16.295475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.312 [2024-12-09 14:51:16.295631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.312 [2024-12-09 14:51:16.295696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.312 
14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.312 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.313 [2024-12-09 14:51:16.367037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:38.313 [2024-12-09 14:51:16.367147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.313 [2024-12-09 14:51:16.367188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:38.313 [2024-12-09 14:51:16.367238] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.313 [2024-12-09 14:51:16.369348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.313 [2024-12-09 14:51:16.369383] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:38.313 [2024-12-09 14:51:16.369442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:38.313 [2024-12-09 14:51:16.369491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.313 [2024-12-09 14:51:16.369627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:38.313 spare 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.313 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.573 [2024-12-09 14:51:16.469529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:38.573 [2024-12-09 14:51:16.469563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:38.573 [2024-12-09 14:51:16.469701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:38.573 [2024-12-09 14:51:16.469826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:38.573 [2024-12-09 14:51:16.469844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:38.573 [2024-12-09 14:51:16.469954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.573 14:51:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.573 14:51:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.573 "name": "raid_bdev1", 00:18:38.573 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:38.573 "strip_size_kb": 0, 00:18:38.573 "state": "online", 00:18:38.573 "raid_level": "raid1", 00:18:38.573 "superblock": true, 00:18:38.573 "num_base_bdevs": 2, 00:18:38.573 "num_base_bdevs_discovered": 2, 00:18:38.573 "num_base_bdevs_operational": 2, 00:18:38.573 "base_bdevs_list": [ 00:18:38.573 { 00:18:38.573 "name": "spare", 00:18:38.573 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:38.573 "is_configured": true, 00:18:38.573 "data_offset": 256, 00:18:38.573 "data_size": 7936 00:18:38.573 }, 00:18:38.573 { 00:18:38.573 "name": "BaseBdev2", 00:18:38.573 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:38.573 "is_configured": true, 00:18:38.573 "data_offset": 256, 00:18:38.573 "data_size": 7936 00:18:38.573 } 00:18:38.573 ] 00:18:38.573 }' 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.573 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.833 14:51:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.833 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.095 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.095 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.095 "name": "raid_bdev1", 00:18:39.095 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:39.095 "strip_size_kb": 0, 00:18:39.095 "state": "online", 00:18:39.095 "raid_level": "raid1", 00:18:39.095 "superblock": true, 00:18:39.095 "num_base_bdevs": 2, 00:18:39.095 "num_base_bdevs_discovered": 2, 00:18:39.095 "num_base_bdevs_operational": 2, 00:18:39.095 "base_bdevs_list": [ 00:18:39.095 { 00:18:39.095 "name": "spare", 00:18:39.095 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:39.095 "is_configured": true, 00:18:39.095 "data_offset": 256, 00:18:39.095 "data_size": 7936 00:18:39.095 }, 00:18:39.095 { 00:18:39.095 "name": "BaseBdev2", 00:18:39.095 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:39.095 "is_configured": true, 00:18:39.095 "data_offset": 256, 00:18:39.095 "data_size": 7936 00:18:39.095 } 00:18:39.095 ] 00:18:39.095 }' 00:18:39.095 14:51:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.095 14:51:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.095 [2024-12-09 14:51:17.125898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.095 14:51:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.095 "name": "raid_bdev1", 00:18:39.095 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:39.095 "strip_size_kb": 0, 00:18:39.095 "state": "online", 00:18:39.095 "raid_level": "raid1", 00:18:39.095 "superblock": true, 00:18:39.095 "num_base_bdevs": 2, 00:18:39.095 "num_base_bdevs_discovered": 1, 00:18:39.095 "num_base_bdevs_operational": 1, 00:18:39.095 "base_bdevs_list": [ 00:18:39.095 { 00:18:39.095 "name": null, 00:18:39.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.095 "is_configured": false, 00:18:39.095 "data_offset": 0, 00:18:39.095 "data_size": 7936 00:18:39.095 }, 00:18:39.095 { 00:18:39.095 "name": "BaseBdev2", 00:18:39.095 
"uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:39.095 "is_configured": true, 00:18:39.095 "data_offset": 256, 00:18:39.095 "data_size": 7936 00:18:39.095 } 00:18:39.095 ] 00:18:39.095 }' 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.095 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.664 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.664 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.664 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.664 [2024-12-09 14:51:17.553196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.664 [2024-12-09 14:51:17.553459] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.664 [2024-12-09 14:51:17.553529] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:39.664 [2024-12-09 14:51:17.553617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.664 [2024-12-09 14:51:17.570537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:39.664 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.664 14:51:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:39.664 [2024-12-09 14:51:17.572669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:40.603 "name": "raid_bdev1", 00:18:40.603 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:40.603 "strip_size_kb": 0, 00:18:40.603 "state": "online", 00:18:40.603 "raid_level": "raid1", 00:18:40.603 "superblock": true, 00:18:40.603 "num_base_bdevs": 2, 00:18:40.603 "num_base_bdevs_discovered": 2, 00:18:40.603 "num_base_bdevs_operational": 2, 00:18:40.603 "process": { 00:18:40.603 "type": "rebuild", 00:18:40.603 "target": "spare", 00:18:40.603 "progress": { 00:18:40.603 "blocks": 2560, 00:18:40.603 "percent": 32 00:18:40.603 } 00:18:40.603 }, 00:18:40.603 "base_bdevs_list": [ 00:18:40.603 { 00:18:40.603 "name": "spare", 00:18:40.603 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:40.603 "is_configured": true, 00:18:40.603 "data_offset": 256, 00:18:40.603 "data_size": 7936 00:18:40.603 }, 00:18:40.603 { 00:18:40.603 "name": "BaseBdev2", 00:18:40.603 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:40.603 "is_configured": true, 00:18:40.603 "data_offset": 256, 00:18:40.603 "data_size": 7936 00:18:40.603 } 00:18:40.603 ] 00:18:40.603 }' 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.603 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:40.604 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.604 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.604 [2024-12-09 14:51:18.712050] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.863 [2024-12-09 14:51:18.778253] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:40.863 [2024-12-09 14:51:18.778344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.863 [2024-12-09 14:51:18.778360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.863 [2024-12-09 14:51:18.778369] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.863 14:51:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.863 "name": "raid_bdev1", 00:18:40.863 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:40.863 "strip_size_kb": 0, 00:18:40.863 "state": "online", 00:18:40.863 "raid_level": "raid1", 00:18:40.863 "superblock": true, 00:18:40.863 "num_base_bdevs": 2, 00:18:40.863 "num_base_bdevs_discovered": 1, 00:18:40.863 "num_base_bdevs_operational": 1, 00:18:40.863 "base_bdevs_list": [ 00:18:40.863 { 00:18:40.863 "name": null, 00:18:40.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.863 "is_configured": false, 00:18:40.863 "data_offset": 0, 00:18:40.863 "data_size": 7936 00:18:40.863 }, 00:18:40.863 { 00:18:40.863 "name": "BaseBdev2", 00:18:40.863 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:40.863 "is_configured": true, 00:18:40.863 "data_offset": 256, 00:18:40.863 "data_size": 7936 00:18:40.863 } 00:18:40.863 ] 00:18:40.863 }' 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.863 14:51:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 14:51:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:41.123 14:51:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.123 14:51:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 [2024-12-09 14:51:19.237240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:41.123 [2024-12-09 14:51:19.237367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.123 [2024-12-09 14:51:19.237442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:41.123 [2024-12-09 14:51:19.237480] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.123 [2024-12-09 14:51:19.237730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.123 [2024-12-09 14:51:19.237786] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:41.123 [2024-12-09 14:51:19.237878] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:41.123 [2024-12-09 14:51:19.237920] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:41.123 [2024-12-09 14:51:19.237965] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:41.123 [2024-12-09 14:51:19.238014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.383 [2024-12-09 14:51:19.254380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:41.383 spare 00:18:41.383 14:51:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.383 [2024-12-09 14:51:19.256418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.383 14:51:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:42.344 "name": "raid_bdev1", 00:18:42.344 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:42.344 "strip_size_kb": 0, 00:18:42.344 "state": "online", 00:18:42.344 "raid_level": "raid1", 00:18:42.344 "superblock": true, 00:18:42.344 "num_base_bdevs": 2, 00:18:42.344 "num_base_bdevs_discovered": 2, 00:18:42.344 "num_base_bdevs_operational": 2, 00:18:42.344 "process": { 00:18:42.344 "type": "rebuild", 00:18:42.344 "target": "spare", 00:18:42.344 "progress": { 00:18:42.344 "blocks": 2560, 00:18:42.344 "percent": 32 00:18:42.344 } 00:18:42.344 }, 00:18:42.344 "base_bdevs_list": [ 00:18:42.344 { 00:18:42.344 "name": "spare", 00:18:42.344 "uuid": "0a047e52-1f8f-590d-86d5-95ba8284f584", 00:18:42.344 "is_configured": true, 00:18:42.344 "data_offset": 256, 00:18:42.344 "data_size": 7936 00:18:42.344 }, 00:18:42.344 { 00:18:42.344 "name": "BaseBdev2", 00:18:42.344 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:42.344 "is_configured": true, 00:18:42.344 "data_offset": 256, 00:18:42.344 "data_size": 7936 00:18:42.344 } 00:18:42.344 ] 00:18:42.344 }' 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.344 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.344 [2024-12-09 
14:51:20.411885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.344 [2024-12-09 14:51:20.462109] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.344 [2024-12-09 14:51:20.462167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.344 [2024-12-09 14:51:20.462201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.344 [2024-12-09 14:51:20.462208] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.604 14:51:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.604 "name": "raid_bdev1", 00:18:42.604 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:42.604 "strip_size_kb": 0, 00:18:42.604 "state": "online", 00:18:42.604 "raid_level": "raid1", 00:18:42.604 "superblock": true, 00:18:42.604 "num_base_bdevs": 2, 00:18:42.604 "num_base_bdevs_discovered": 1, 00:18:42.604 "num_base_bdevs_operational": 1, 00:18:42.604 "base_bdevs_list": [ 00:18:42.604 { 00:18:42.604 "name": null, 00:18:42.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.604 "is_configured": false, 00:18:42.604 "data_offset": 0, 00:18:42.604 "data_size": 7936 00:18:42.604 }, 00:18:42.604 { 00:18:42.604 "name": "BaseBdev2", 00:18:42.604 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:42.604 "is_configured": true, 00:18:42.604 "data_offset": 256, 00:18:42.604 "data_size": 7936 00:18:42.604 } 00:18:42.604 ] 00:18:42.604 }' 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.604 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.864 14:51:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.864 "name": "raid_bdev1", 00:18:42.864 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:42.864 "strip_size_kb": 0, 00:18:42.864 "state": "online", 00:18:42.864 "raid_level": "raid1", 00:18:42.864 "superblock": true, 00:18:42.864 "num_base_bdevs": 2, 00:18:42.864 "num_base_bdevs_discovered": 1, 00:18:42.864 "num_base_bdevs_operational": 1, 00:18:42.864 "base_bdevs_list": [ 00:18:42.864 { 00:18:42.864 "name": null, 00:18:42.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.864 "is_configured": false, 00:18:42.864 "data_offset": 0, 00:18:42.864 "data_size": 7936 00:18:42.864 }, 00:18:42.864 { 00:18:42.864 "name": "BaseBdev2", 00:18:42.864 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:42.864 "is_configured": true, 00:18:42.864 "data_offset": 256, 
00:18:42.864 "data_size": 7936 00:18:42.864 } 00:18:42.864 ] 00:18:42.864 }' 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.864 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.123 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.123 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:43.123 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.123 14:51:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.123 14:51:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.123 14:51:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:43.123 14:51:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.123 14:51:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.124 [2024-12-09 14:51:21.009237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:43.124 [2024-12-09 14:51:21.009297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.124 [2024-12-09 14:51:21.009319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:43.124 [2024-12-09 14:51:21.009327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.124 [2024-12-09 14:51:21.009516] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.124 [2024-12-09 14:51:21.009529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:43.124 [2024-12-09 14:51:21.009595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:43.124 [2024-12-09 14:51:21.009609] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:43.124 [2024-12-09 14:51:21.009619] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:43.124 [2024-12-09 14:51:21.009629] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:43.124 BaseBdev1 00:18:43.124 14:51:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.124 14:51:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.062 14:51:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.062 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.063 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.063 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.063 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.063 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.063 "name": "raid_bdev1", 00:18:44.063 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:44.063 "strip_size_kb": 0, 00:18:44.063 "state": "online", 00:18:44.063 "raid_level": "raid1", 00:18:44.063 "superblock": true, 00:18:44.063 "num_base_bdevs": 2, 00:18:44.063 "num_base_bdevs_discovered": 1, 00:18:44.063 "num_base_bdevs_operational": 1, 00:18:44.063 "base_bdevs_list": [ 00:18:44.063 { 00:18:44.063 "name": null, 00:18:44.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.063 "is_configured": false, 00:18:44.063 "data_offset": 0, 00:18:44.063 "data_size": 7936 00:18:44.063 }, 00:18:44.063 { 00:18:44.063 "name": "BaseBdev2", 00:18:44.063 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:44.063 "is_configured": true, 00:18:44.063 "data_offset": 256, 00:18:44.063 "data_size": 7936 00:18:44.063 } 00:18:44.063 ] 00:18:44.063 }' 00:18:44.063 14:51:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.063 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.322 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.582 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.583 "name": "raid_bdev1", 00:18:44.583 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:44.583 "strip_size_kb": 0, 00:18:44.583 "state": "online", 00:18:44.583 "raid_level": "raid1", 00:18:44.583 "superblock": true, 00:18:44.583 "num_base_bdevs": 2, 00:18:44.583 "num_base_bdevs_discovered": 1, 00:18:44.583 "num_base_bdevs_operational": 1, 00:18:44.583 "base_bdevs_list": [ 00:18:44.583 { 00:18:44.583 "name": 
null, 00:18:44.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.583 "is_configured": false, 00:18:44.583 "data_offset": 0, 00:18:44.583 "data_size": 7936 00:18:44.583 }, 00:18:44.583 { 00:18:44.583 "name": "BaseBdev2", 00:18:44.583 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:44.583 "is_configured": true, 00:18:44.583 "data_offset": 256, 00:18:44.583 "data_size": 7936 00:18:44.583 } 00:18:44.583 ] 00:18:44.583 }' 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.583 [2024-12-09 14:51:22.570639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.583 [2024-12-09 14:51:22.570845] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:44.583 [2024-12-09 14:51:22.570908] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:44.583 request: 00:18:44.583 { 00:18:44.583 "base_bdev": "BaseBdev1", 00:18:44.583 "raid_bdev": "raid_bdev1", 00:18:44.583 "method": "bdev_raid_add_base_bdev", 00:18:44.583 "req_id": 1 00:18:44.583 } 00:18:44.583 Got JSON-RPC error response 00:18:44.583 response: 00:18:44.583 { 00:18:44.583 "code": -22, 00:18:44.583 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:44.583 } 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.583 14:51:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.537 "name": "raid_bdev1", 00:18:45.537 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:45.537 "strip_size_kb": 0, 
00:18:45.537 "state": "online", 00:18:45.537 "raid_level": "raid1", 00:18:45.537 "superblock": true, 00:18:45.537 "num_base_bdevs": 2, 00:18:45.537 "num_base_bdevs_discovered": 1, 00:18:45.537 "num_base_bdevs_operational": 1, 00:18:45.537 "base_bdevs_list": [ 00:18:45.537 { 00:18:45.537 "name": null, 00:18:45.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.537 "is_configured": false, 00:18:45.537 "data_offset": 0, 00:18:45.537 "data_size": 7936 00:18:45.537 }, 00:18:45.537 { 00:18:45.537 "name": "BaseBdev2", 00:18:45.537 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:45.537 "is_configured": true, 00:18:45.537 "data_offset": 256, 00:18:45.537 "data_size": 7936 00:18:45.537 } 00:18:45.537 ] 00:18:45.537 }' 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.537 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.106 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.106 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.106 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.106 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.106 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.107 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.107 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.107 14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.107 
14:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.107 "name": "raid_bdev1", 00:18:46.107 "uuid": "ab6ad454-0abb-44dc-b5a4-01daaa4e0846", 00:18:46.107 "strip_size_kb": 0, 00:18:46.107 "state": "online", 00:18:46.107 "raid_level": "raid1", 00:18:46.107 "superblock": true, 00:18:46.107 "num_base_bdevs": 2, 00:18:46.107 "num_base_bdevs_discovered": 1, 00:18:46.107 "num_base_bdevs_operational": 1, 00:18:46.107 "base_bdevs_list": [ 00:18:46.107 { 00:18:46.107 "name": null, 00:18:46.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.107 "is_configured": false, 00:18:46.107 "data_offset": 0, 00:18:46.107 "data_size": 7936 00:18:46.107 }, 00:18:46.107 { 00:18:46.107 "name": "BaseBdev2", 00:18:46.107 "uuid": "87866dc7-d31c-59a8-8447-39674bdbf6ea", 00:18:46.107 "is_configured": true, 00:18:46.107 "data_offset": 256, 00:18:46.107 "data_size": 7936 00:18:46.107 } 00:18:46.107 ] 00:18:46.107 }' 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 90356 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90356 ']' 00:18:46.107 14:51:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90356 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90356 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.107 killing process with pid 90356 00:18:46.107 Received shutdown signal, test time was about 60.000000 seconds 00:18:46.107 00:18:46.107 Latency(us) 00:18:46.107 [2024-12-09T14:51:24.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.107 [2024-12-09T14:51:24.229Z] =================================================================================================================== 00:18:46.107 [2024-12-09T14:51:24.229Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90356' 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90356 00:18:46.107 [2024-12-09 14:51:24.168487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.107 [2024-12-09 14:51:24.168633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.107 14:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90356 00:18:46.107 [2024-12-09 14:51:24.168688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:46.107 [2024-12-09 14:51:24.168701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:46.367 [2024-12-09 14:51:24.457980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:47.745 14:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:47.745 00:18:47.745 real 0m17.355s 00:18:47.745 user 0m22.739s 00:18:47.745 sys 0m1.571s 00:18:47.745 14:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.745 ************************************ 00:18:47.745 END TEST raid_rebuild_test_sb_md_interleaved 00:18:47.745 ************************************ 00:18:47.745 14:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.745 14:51:25 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:47.745 14:51:25 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:47.745 14:51:25 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 90356 ']' 00:18:47.745 14:51:25 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 90356 00:18:47.745 14:51:25 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:47.745 ************************************ 00:18:47.745 END TEST bdev_raid 00:18:47.745 00:18:47.745 real 12m6.858s 00:18:47.745 user 16m24.716s 00:18:47.745 sys 1m52.569s 00:18:47.745 14:51:25 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.745 14:51:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.745 ************************************ 00:18:47.745 14:51:25 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:47.745 14:51:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:47.745 14:51:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.745 14:51:25 -- common/autotest_common.sh@10 -- # set +x 00:18:47.745 
************************************ 00:18:47.745 START TEST spdkcli_raid 00:18:47.745 ************************************ 00:18:47.745 14:51:25 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:47.745 * Looking for test storage... 00:18:47.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:47.745 14:51:25 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:47.745 14:51:25 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:47.745 14:51:25 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.005 14:51:25 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.005 14:51:25 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.006 14:51:25 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.006 --rc genhtml_branch_coverage=1 00:18:48.006 --rc genhtml_function_coverage=1 00:18:48.006 --rc genhtml_legend=1 00:18:48.006 --rc geninfo_all_blocks=1 00:18:48.006 --rc geninfo_unexecuted_blocks=1 00:18:48.006 00:18:48.006 ' 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.006 --rc genhtml_branch_coverage=1 00:18:48.006 --rc genhtml_function_coverage=1 00:18:48.006 --rc genhtml_legend=1 00:18:48.006 --rc geninfo_all_blocks=1 00:18:48.006 --rc geninfo_unexecuted_blocks=1 00:18:48.006 00:18:48.006 ' 00:18:48.006 
14:51:25 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.006 --rc genhtml_branch_coverage=1 00:18:48.006 --rc genhtml_function_coverage=1 00:18:48.006 --rc genhtml_legend=1 00:18:48.006 --rc geninfo_all_blocks=1 00:18:48.006 --rc geninfo_unexecuted_blocks=1 00:18:48.006 00:18:48.006 ' 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.006 --rc genhtml_branch_coverage=1 00:18:48.006 --rc genhtml_function_coverage=1 00:18:48.006 --rc genhtml_legend=1 00:18:48.006 --rc geninfo_all_blocks=1 00:18:48.006 --rc geninfo_unexecuted_blocks=1 00:18:48.006 00:18:48.006 ' 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:48.006 14:51:25 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=91034 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:48.006 14:51:25 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 91034 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 91034 ']' 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.006 14:51:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.006 [2024-12-09 14:51:26.053517] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:18:48.006 [2024-12-09 14:51:26.053757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91034 ] 00:18:48.266 [2024-12-09 14:51:26.229504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:48.266 [2024-12-09 14:51:26.341330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.266 [2024-12-09 14:51:26.341367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.202 14:51:27 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.202 14:51:27 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:49.202 14:51:27 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:49.202 14:51:27 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.202 14:51:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.202 14:51:27 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:49.202 14:51:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.202 14:51:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.202 14:51:27 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:49.202 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:49.202 ' 00:18:51.110 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:51.110 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:51.110 14:51:28 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:51.110 14:51:28 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.110 14:51:28 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.110 14:51:28 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:51.110 14:51:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.110 14:51:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.110 14:51:28 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:51.110 ' 00:18:52.050 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:52.050 14:51:30 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:52.050 14:51:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.050 14:51:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.050 14:51:30 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:52.050 14:51:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.050 14:51:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.050 14:51:30 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:52.050 14:51:30 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:52.619 14:51:30 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:52.619 14:51:30 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:52.619 14:51:30 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:52.619 14:51:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.619 14:51:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.619 14:51:30 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:52.619 14:51:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.619 14:51:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.619 14:51:30 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:52.619 ' 00:18:53.996 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:53.996 14:51:31 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:53.996 14:51:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.996 14:51:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.996 14:51:31 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:53.996 14:51:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.996 14:51:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.996 14:51:31 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:53.996 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:53.996 ' 00:18:55.389 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:55.389 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:55.389 14:51:33 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.389 14:51:33 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 91034 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91034 ']' 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91034 00:18:55.389 14:51:33 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91034 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91034' 00:18:55.389 killing process with pid 91034 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 91034 00:18:55.389 14:51:33 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 91034 00:18:57.927 14:51:35 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:57.927 14:51:35 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 91034 ']' 00:18:57.927 14:51:35 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 91034 00:18:57.927 14:51:35 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91034 ']' 00:18:57.927 14:51:35 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91034 00:18:57.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91034) - No such process 00:18:57.927 14:51:35 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 91034 is not found' 00:18:57.927 Process with pid 91034 is not found 00:18:57.927 14:51:35 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:57.927 14:51:35 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:57.927 14:51:35 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:57.927 14:51:35 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:57.927 00:18:57.927 real 0m10.185s 00:18:57.927 user 0m20.953s 00:18:57.927 sys 
0m1.140s 00:18:57.927 14:51:35 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.927 14:51:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.927 ************************************ 00:18:57.927 END TEST spdkcli_raid 00:18:57.927 ************************************ 00:18:57.927 14:51:35 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:57.927 14:51:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:57.927 14:51:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.927 14:51:35 -- common/autotest_common.sh@10 -- # set +x 00:18:57.927 ************************************ 00:18:57.927 START TEST blockdev_raid5f 00:18:57.927 ************************************ 00:18:57.927 14:51:35 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:58.188 * Looking for test storage... 00:18:58.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.188 14:51:36 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:58.188 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.188 --rc genhtml_branch_coverage=1 00:18:58.188 --rc genhtml_function_coverage=1 00:18:58.188 --rc genhtml_legend=1 00:18:58.188 --rc geninfo_all_blocks=1 00:18:58.188 --rc geninfo_unexecuted_blocks=1 00:18:58.188 00:18:58.188 ' 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:58.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.188 --rc genhtml_branch_coverage=1 00:18:58.188 --rc genhtml_function_coverage=1 00:18:58.188 --rc genhtml_legend=1 00:18:58.188 --rc geninfo_all_blocks=1 00:18:58.188 --rc geninfo_unexecuted_blocks=1 00:18:58.188 00:18:58.188 ' 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:58.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.188 --rc genhtml_branch_coverage=1 00:18:58.188 --rc genhtml_function_coverage=1 00:18:58.188 --rc genhtml_legend=1 00:18:58.188 --rc geninfo_all_blocks=1 00:18:58.188 --rc geninfo_unexecuted_blocks=1 00:18:58.188 00:18:58.188 ' 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:58.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.188 --rc genhtml_branch_coverage=1 00:18:58.188 --rc genhtml_function_coverage=1 00:18:58.188 --rc genhtml_legend=1 00:18:58.188 --rc geninfo_all_blocks=1 00:18:58.188 --rc geninfo_unexecuted_blocks=1 00:18:58.188 00:18:58.188 ' 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=91315 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:58.188 14:51:36 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 91315 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 91315 ']' 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.188 14:51:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.188 [2024-12-09 14:51:36.288421] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:18:58.188 [2024-12-09 14:51:36.288540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91315 ] 00:18:58.448 [2024-12-09 14:51:36.463687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.708 [2024-12-09 14:51:36.580159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:59.647 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:59.647 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:59.647 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.647 Malloc0 00:18:59.647 Malloc1 00:18:59.647 Malloc2 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.647 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.647 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:59.647 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.647 14:51:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.648 14:51:37 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f21e0c20-7a10-43cd-8556-a99d03140cf7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f21e0c20-7a10-43cd-8556-a99d03140cf7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f21e0c20-7a10-43cd-8556-a99d03140cf7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9bfdaa28-6f37-45ea-84bc-2ef7394c0e6b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "810deb2d-431c-4d9b-922f-b9836d473022",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c5dac072-ad12-4e9f-93f4-99a067520323",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:59.648 14:51:37 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 91315 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 91315 ']' 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 91315 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.648 
14:51:37 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91315 00:18:59.648 killing process with pid 91315 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91315' 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 91315 00:18:59.648 14:51:37 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 91315 00:19:02.942 14:51:40 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:02.942 14:51:40 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:02.942 14:51:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:02.942 14:51:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.942 14:51:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:02.942 ************************************ 00:19:02.942 START TEST bdev_hello_world 00:19:02.942 ************************************ 00:19:02.942 14:51:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:02.942 [2024-12-09 14:51:40.515505] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:19:02.942 [2024-12-09 14:51:40.515737] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91383 ] 00:19:02.942 [2024-12-09 14:51:40.687241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.942 [2024-12-09 14:51:40.806857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.511 [2024-12-09 14:51:41.349231] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:03.511 [2024-12-09 14:51:41.349378] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:03.511 [2024-12-09 14:51:41.349413] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:03.511 [2024-12-09 14:51:41.350034] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:03.511 [2024-12-09 14:51:41.350223] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:03.511 [2024-12-09 14:51:41.350282] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:03.511 [2024-12-09 14:51:41.350374] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
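An aside on the `bdev_get_bdevs` output printed earlier in this log: it is plain JSON, so the fields the test extracts with `jq -r '.[] | select(.claimed == false)'` and `jq -r .name` can be pulled out the same way in Python. The snippet below is a minimal sketch, not part of the test suite; the embedded JSON is a trimmed copy of the raid5f description shown above, keeping only the fields the sketch reads.

```python
import json

# Trimmed copy of the bdev description printed by `rpc_cmd bdev_get_bdevs`
# in the log above (only the fields used below are kept).
bdev_json = '''
[
  {
    "name": "raid5f",
    "product_name": "Raid Volume",
    "block_size": 512,
    "num_blocks": 131072,
    "claimed": false,
    "driver_specific": {
      "raid": {
        "raid_level": "raid5f",
        "num_base_bdevs": 3,
        "base_bdevs_list": [
          {"name": "Malloc0", "is_configured": true},
          {"name": "Malloc1", "is_configured": true},
          {"name": "Malloc2", "is_configured": true}
        ]
      }
    }
  }
]
'''

# Mirror the test's jq filter: keep unclaimed bdevs, then collect their names.
bdevs = [b for b in json.loads(bdev_json) if not b["claimed"]]
names = [b["name"] for b in bdevs]
print(names)  # ['raid5f']

raid = bdevs[0]["driver_specific"]["raid"]
print(raid["raid_level"])            # raid5f
print(len(raid["base_bdevs_list"]))  # 3
```

The `select(.claimed == false)` step matters because claimed bdevs (here, the three Malloc base bdevs once the raid volume owns them) are not valid targets for the hello_world and bounds tests that follow.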
00:19:03.511 00:19:03.511 [2024-12-09 14:51:41.350430] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:04.892 00:19:04.892 real 0m2.312s 00:19:04.892 user 0m1.952s 00:19:04.892 sys 0m0.238s 00:19:04.892 14:51:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.892 ************************************ 00:19:04.892 END TEST bdev_hello_world 00:19:04.892 ************************************ 00:19:04.892 14:51:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:04.892 14:51:42 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:04.892 14:51:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:04.892 14:51:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.892 14:51:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.892 ************************************ 00:19:04.892 START TEST bdev_bounds 00:19:04.892 ************************************ 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:04.892 Process bdevio pid: 91425 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=91425 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 91425' 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 91425 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 91425 ']' 00:19:04.892 14:51:42 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.892 14:51:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:04.892 [2024-12-09 14:51:42.890950] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:19:04.892 [2024-12-09 14:51:42.891162] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91425 ] 00:19:05.152 [2024-12-09 14:51:43.044925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.152 [2024-12-09 14:51:43.162099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.152 [2024-12-09 14:51:43.162240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.152 [2024-12-09 14:51:43.162276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.721 14:51:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.721 14:51:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:05.721 14:51:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:05.980 I/O targets: 00:19:05.980 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:05.980 00:19:05.980 
00:19:05.980 CUnit - A unit testing framework for C - Version 2.1-3 00:19:05.980 http://cunit.sourceforge.net/ 00:19:05.980 00:19:05.980 00:19:05.980 Suite: bdevio tests on: raid5f 00:19:05.980 Test: blockdev write read block ...passed 00:19:05.980 Test: blockdev write zeroes read block ...passed 00:19:05.980 Test: blockdev write zeroes read no split ...passed 00:19:05.980 Test: blockdev write zeroes read split ...passed 00:19:05.980 Test: blockdev write zeroes read split partial ...passed 00:19:05.980 Test: blockdev reset ...passed 00:19:05.980 Test: blockdev write read 8 blocks ...passed 00:19:05.980 Test: blockdev write read size > 128k ...passed 00:19:05.980 Test: blockdev write read invalid size ...passed 00:19:05.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:05.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:05.980 Test: blockdev write read max offset ...passed 00:19:05.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:05.980 Test: blockdev writev readv 8 blocks ...passed 00:19:05.980 Test: blockdev writev readv 30 x 1block ...passed 00:19:05.980 Test: blockdev writev readv block ...passed 00:19:05.980 Test: blockdev writev readv size > 128k ...passed 00:19:05.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:06.240 Test: blockdev comparev and writev ...passed 00:19:06.240 Test: blockdev nvme passthru rw ...passed 00:19:06.240 Test: blockdev nvme passthru vendor specific ...passed 00:19:06.240 Test: blockdev nvme admin passthru ...passed 00:19:06.240 Test: blockdev copy ...passed 00:19:06.240 00:19:06.240 Run Summary: Type Total Ran Passed Failed Inactive 00:19:06.240 suites 1 1 n/a 0 0 00:19:06.240 tests 23 23 23 0 0 00:19:06.240 asserts 130 130 130 0 n/a 00:19:06.240 00:19:06.240 Elapsed time = 0.629 seconds 00:19:06.240 0 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 91425 00:19:06.240 
14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 91425 ']' 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 91425 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91425 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91425' 00:19:06.240 killing process with pid 91425 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 91425 00:19:06.240 14:51:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 91425 00:19:07.618 14:51:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:07.618 00:19:07.618 real 0m2.884s 00:19:07.618 user 0m7.297s 00:19:07.618 sys 0m0.361s 00:19:07.618 14:51:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.618 14:51:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:07.618 ************************************ 00:19:07.618 END TEST bdev_bounds 00:19:07.618 ************************************ 00:19:07.877 14:51:45 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:07.877 14:51:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:07.877 14:51:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.877 
14:51:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.877 ************************************ 00:19:07.877 START TEST bdev_nbd 00:19:07.877 ************************************ 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=91485 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 91485 /var/tmp/spdk-nbd.sock 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 91485 ']' 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:07.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.877 14:51:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:07.877 [2024-12-09 14:51:45.866224] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:19:07.877 [2024-12-09 14:51:45.866423] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.137 [2024-12-09 14:51:46.031032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.137 [2024-12-09 14:51:46.170851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:08.706 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.707 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:08.707 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:08.707 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:08.707 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:08.707 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:08.707 14:51:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:08.707 14:51:46 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.966 1+0 records in 00:19:08.966 1+0 records out 00:19:08.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415134 s, 9.9 MB/s 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:08.966 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:09.231 { 00:19:09.231 "nbd_device": "/dev/nbd0", 00:19:09.231 "bdev_name": "raid5f" 00:19:09.231 } 00:19:09.231 ]' 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:09.231 { 00:19:09.231 "nbd_device": "/dev/nbd0", 00:19:09.231 "bdev_name": "raid5f" 00:19:09.231 } 00:19:09.231 ]' 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.231 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.501 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:09.772 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.773 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.773 14:51:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:10.032 /dev/nbd0 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.032 14:51:48 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.032 1+0 records in 00:19:10.032 1+0 records out 00:19:10.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511832 s, 8.0 MB/s 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.032 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.033 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:10.292 { 00:19:10.292 "nbd_device": "/dev/nbd0", 00:19:10.292 "bdev_name": "raid5f" 00:19:10.292 } 00:19:10.292 ]' 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:10.292 { 00:19:10.292 "nbd_device": "/dev/nbd0", 00:19:10.292 "bdev_name": "raid5f" 00:19:10.292 } 00:19:10.292 ]' 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:10.292 256+0 records in 00:19:10.292 256+0 records out 00:19:10.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134206 s, 78.1 MB/s 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:10.292 256+0 records in 00:19:10.292 256+0 records out 00:19:10.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336485 s, 31.2 MB/s 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:10.292 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:10.293 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.293 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:10.293 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.293 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:10.293 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.293 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.552 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:10.811 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.812 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:10.812 14:51:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:11.071 malloc_lvol_verify 00:19:11.071 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:11.330 f54994c5-06e4-4d24-bc59-d2715976c77c 00:19:11.330 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:11.590 32a3d87e-4591-43b0-ad79-4474a12e02cd 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:11.590 /dev/nbd0 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:11.590 mke2fs 1.47.0 (5-Feb-2023) 00:19:11.590 Discarding device blocks: 0/4096 done 00:19:11.590 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:11.590 00:19:11.590 Allocating group tables: 0/1 done 00:19:11.590 Writing inode tables: 0/1 done 00:19:11.590 Creating journal (1024 blocks): done 00:19:11.590 Writing superblocks and filesystem accounting information: 0/1 done 00:19:11.590 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.590 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 91485 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 91485 ']' 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 91485 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91485 00:19:11.850 killing process with pid 91485 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91485' 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 91485 00:19:11.850 14:51:49 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 91485 00:19:13.759 14:51:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:13.759 00:19:13.759 real 0m5.778s 00:19:13.759 user 0m7.592s 00:19:13.759 sys 0m1.405s 00:19:13.759 14:51:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.759 14:51:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:13.759 ************************************ 00:19:13.759 END TEST bdev_nbd 00:19:13.759 ************************************ 00:19:13.759 14:51:51 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:13.759 14:51:51 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:13.759 14:51:51 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:13.759 14:51:51 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:13.759 14:51:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.759 14:51:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.759 14:51:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:13.759 ************************************ 00:19:13.759 START TEST bdev_fio 00:19:13.759 ************************************ 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:13.759 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:13.759 ************************************ 00:19:13.759 START TEST bdev_fio_rw_verify 00:19:13.759 ************************************ 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:13.759 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.760 14:51:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:14.020 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:14.020 fio-3.35 00:19:14.020 Starting 1 thread 00:19:26.235 00:19:26.235 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91691: Mon Dec 9 14:52:03 2024 00:19:26.235 read: IOPS=10.8k, BW=42.0MiB/s (44.0MB/s)(420MiB/10001msec) 00:19:26.235 slat (nsec): min=17940, max=80272, avg=22331.24, stdev=3254.40 00:19:26.235 clat (usec): min=13, max=413, avg=150.29, stdev=54.54 00:19:26.235 lat (usec): min=35, max=441, avg=172.62, stdev=55.33 00:19:26.235 clat percentiles (usec): 00:19:26.235 | 50.000th=[ 153], 99.000th=[ 269], 99.900th=[ 302], 99.990th=[ 326], 00:19:26.235 | 99.999th=[ 359] 00:19:26.235 write: IOPS=11.2k, BW=43.9MiB/s (46.1MB/s)(434MiB/9877msec); 0 zone resets 00:19:26.235 slat (usec): min=7, max=238, avg=18.37, stdev= 4.37 00:19:26.235 clat (usec): min=33, max=1689, avg=343.56, stdev=53.42 00:19:26.235 lat (usec): min=48, max=1928, avg=361.94, stdev=55.05 00:19:26.235 clat percentiles (usec): 00:19:26.235 | 50.000th=[ 343], 99.000th=[ 474], 99.900th=[ 611], 99.990th=[ 996], 00:19:26.235 | 99.999th=[ 1582] 00:19:26.235 bw ( KiB/s): min=39680, max=47312, per=98.80%, avg=44462.00, stdev=1977.80, samples=19 00:19:26.235 iops : min= 9920, max=11828, avg=11115.47, stdev=494.47, samples=19 00:19:26.235 lat (usec) : 20=0.01%, 50=0.01%, 
100=10.72%, 250=38.13%, 500=50.95% 00:19:26.235 lat (usec) : 750=0.16%, 1000=0.02% 00:19:26.235 lat (msec) : 2=0.01% 00:19:26.235 cpu : usr=98.84%, sys=0.43%, ctx=32, majf=0, minf=8968 00:19:26.235 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.235 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.235 issued rwts: total=107532,111124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.235 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:26.235 00:19:26.235 Run status group 0 (all jobs): 00:19:26.235 READ: bw=42.0MiB/s (44.0MB/s), 42.0MiB/s-42.0MiB/s (44.0MB/s-44.0MB/s), io=420MiB (440MB), run=10001-10001msec 00:19:26.235 WRITE: bw=43.9MiB/s (46.1MB/s), 43.9MiB/s-43.9MiB/s (46.1MB/s-46.1MB/s), io=434MiB (455MB), run=9877-9877msec 00:19:26.804 ----------------------------------------------------- 00:19:26.804 Suppressions used: 00:19:26.804 count bytes template 00:19:26.804 1 7 /usr/src/fio/parse.c 00:19:26.804 263 25248 /usr/src/fio/iolog.c 00:19:26.804 1 8 libtcmalloc_minimal.so 00:19:26.804 1 904 libcrypto.so 00:19:26.804 ----------------------------------------------------- 00:19:26.804 00:19:26.804 00:19:26.804 real 0m13.074s 00:19:26.804 user 0m13.283s 00:19:26.804 sys 0m0.860s 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:26.804 ************************************ 00:19:26.804 END TEST bdev_fio_rw_verify 00:19:26.804 ************************************ 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:26.804 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f21e0c20-7a10-43cd-8556-a99d03140cf7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"f21e0c20-7a10-43cd-8556-a99d03140cf7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f21e0c20-7a10-43cd-8556-a99d03140cf7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9bfdaa28-6f37-45ea-84bc-2ef7394c0e6b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "810deb2d-431c-4d9b-922f-b9836d473022",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c5dac072-ad12-4e9f-93f4-99a067520323",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:27.063 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:27.063 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:27.063 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:27.064 /home/vagrant/spdk_repo/spdk 00:19:27.064 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:27.064 14:52:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:27.064 00:19:27.064 real 0m13.360s 00:19:27.064 user 0m13.411s 00:19:27.064 sys 0m0.992s 00:19:27.064 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.064 14:52:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:27.064 ************************************ 00:19:27.064 END TEST bdev_fio 00:19:27.064 ************************************ 00:19:27.064 14:52:05 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:27.064 14:52:05 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:27.064 14:52:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:27.064 14:52:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.064 14:52:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:27.064 ************************************ 00:19:27.064 START TEST bdev_verify 00:19:27.064 ************************************ 00:19:27.064 14:52:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:27.064 [2024-12-09 14:52:05.112026] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:19:27.064 [2024-12-09 14:52:05.112140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91856 ] 00:19:27.322 [2024-12-09 14:52:05.286273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:27.322 [2024-12-09 14:52:05.428178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.322 [2024-12-09 14:52:05.428217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.257 Running I/O for 5 seconds... 00:19:30.137 10045.00 IOPS, 39.24 MiB/s [2024-12-09T14:52:09.195Z] 10055.00 IOPS, 39.28 MiB/s [2024-12-09T14:52:10.130Z] 10055.67 IOPS, 39.28 MiB/s [2024-12-09T14:52:11.065Z] 10054.00 IOPS, 39.27 MiB/s [2024-12-09T14:52:11.324Z] 10054.60 IOPS, 39.28 MiB/s 00:19:33.202 Latency(us) 00:19:33.202 [2024-12-09T14:52:11.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.202 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:33.202 Verification LBA range: start 0x0 length 0x2000 00:19:33.202 raid5f : 5.03 5819.59 22.73 0.00 0.00 33144.35 126.10 26557.82 00:19:33.202 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:33.202 Verification LBA range: start 0x2000 length 0x2000 00:19:33.202 raid5f : 5.03 4221.53 16.49 0.00 0.00 45674.54 125.21 33197.28 00:19:33.202 [2024-12-09T14:52:11.324Z] =================================================================================================================== 00:19:33.202 [2024-12-09T14:52:11.324Z] Total : 10041.12 39.22 0.00 0.00 38414.51 125.21 33197.28 00:19:34.580 00:19:34.580 real 0m7.583s 00:19:34.580 user 0m13.950s 00:19:34.580 sys 0m0.354s 00:19:34.580 14:52:12 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.580 14:52:12 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:34.580 ************************************ 00:19:34.580 END TEST bdev_verify 00:19:34.580 ************************************ 00:19:34.580 14:52:12 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:34.580 14:52:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:34.580 14:52:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.580 14:52:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.580 ************************************ 00:19:34.580 START TEST bdev_verify_big_io 00:19:34.580 ************************************ 00:19:34.580 14:52:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:34.858 [2024-12-09 14:52:12.768763] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:19:34.858 [2024-12-09 14:52:12.768910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91949 ] 00:19:34.858 [2024-12-09 14:52:12.948776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:35.137 [2024-12-09 14:52:13.087693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.137 [2024-12-09 14:52:13.087728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.705 Running I/O for 5 seconds... 
00:19:38.023 633.00 IOPS, 39.56 MiB/s [2024-12-09T14:52:17.084Z] 696.50 IOPS, 43.53 MiB/s [2024-12-09T14:52:18.023Z] 676.67 IOPS, 42.29 MiB/s [2024-12-09T14:52:18.963Z] 698.00 IOPS, 43.62 MiB/s [2024-12-09T14:52:19.222Z] 710.20 IOPS, 44.39 MiB/s 00:19:41.100 Latency(us) 00:19:41.100 [2024-12-09T14:52:19.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.100 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:41.100 Verification LBA range: start 0x0 length 0x200 00:19:41.100 raid5f : 5.24 399.50 24.97 0.00 0.00 7947640.87 162.77 349830.60 00:19:41.100 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:41.100 Verification LBA range: start 0x200 length 0x200 00:19:41.100 raid5f : 5.27 324.88 20.31 0.00 0.00 9673014.48 200.33 424925.12 00:19:41.100 [2024-12-09T14:52:19.222Z] =================================================================================================================== 00:19:41.100 [2024-12-09T14:52:19.222Z] Total : 724.38 45.27 0.00 0.00 8723991.01 162.77 424925.12 00:19:42.481 00:19:42.481 real 0m7.806s 00:19:42.481 user 0m14.378s 00:19:42.481 sys 0m0.357s 00:19:42.481 14:52:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.481 14:52:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:42.481 ************************************ 00:19:42.481 END TEST bdev_verify_big_io 00:19:42.481 ************************************ 00:19:42.481 14:52:20 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.481 14:52:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:42.481 14:52:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.481 14:52:20 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:42.481 ************************************ 00:19:42.481 START TEST bdev_write_zeroes 00:19:42.481 ************************************ 00:19:42.481 14:52:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.741 [2024-12-09 14:52:20.649217] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:19:42.741 [2024-12-09 14:52:20.649342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92053 ] 00:19:42.741 [2024-12-09 14:52:20.817676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.001 [2024-12-09 14:52:20.952513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.571 Running I/O for 1 seconds... 
00:19:44.510 27279.00 IOPS, 106.56 MiB/s 00:19:44.510 Latency(us) 00:19:44.510 [2024-12-09T14:52:22.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.510 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:44.510 raid5f : 1.01 27243.56 106.42 0.00 0.00 4682.31 1438.07 7269.06 00:19:44.510 [2024-12-09T14:52:22.632Z] =================================================================================================================== 00:19:44.510 [2024-12-09T14:52:22.632Z] Total : 27243.56 106.42 0.00 0.00 4682.31 1438.07 7269.06 00:19:46.418 00:19:46.418 real 0m3.522s 00:19:46.418 user 0m3.028s 00:19:46.418 sys 0m0.366s 00:19:46.418 14:52:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.418 14:52:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:46.418 ************************************ 00:19:46.418 END TEST bdev_write_zeroes 00:19:46.418 ************************************ 00:19:46.418 14:52:24 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.419 14:52:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:46.419 14:52:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.419 14:52:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.419 ************************************ 00:19:46.419 START TEST bdev_json_nonenclosed 00:19:46.419 ************************************ 00:19:46.419 14:52:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.419 [2024-12-09 
14:52:24.234049] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:19:46.419 [2024-12-09 14:52:24.234155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92106 ] 00:19:46.419 [2024-12-09 14:52:24.408833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.678 [2024-12-09 14:52:24.541030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.678 [2024-12-09 14:52:24.541154] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:46.678 [2024-12-09 14:52:24.541184] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:46.678 [2024-12-09 14:52:24.541194] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:46.679 00:19:46.679 real 0m0.650s 00:19:46.679 user 0m0.420s 00:19:46.679 sys 0m0.127s 00:19:46.679 14:52:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.679 14:52:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:46.679 ************************************ 00:19:46.679 END TEST bdev_json_nonenclosed 00:19:46.939 ************************************ 00:19:46.939 14:52:24 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.939 14:52:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:46.939 14:52:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.939 14:52:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.939 
************************************ 00:19:46.939 START TEST bdev_json_nonarray 00:19:46.939 ************************************ 00:19:46.939 14:52:24 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.939 [2024-12-09 14:52:24.950711] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:19:46.939 [2024-12-09 14:52:24.950814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92137 ] 00:19:47.198 [2024-12-09 14:52:25.125277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.198 [2024-12-09 14:52:25.265899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.198 [2024-12-09 14:52:25.266021] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:47.198 [2024-12-09 14:52:25.266041] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:47.198 [2024-12-09 14:52:25.266061] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:47.457 00:19:47.457 real 0m0.668s 00:19:47.457 user 0m0.439s 00:19:47.457 sys 0m0.124s 00:19:47.457 14:52:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.457 14:52:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:47.457 ************************************ 00:19:47.457 END TEST bdev_json_nonarray 00:19:47.457 ************************************ 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:47.716 14:52:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:47.716 00:19:47.716 real 0m49.666s 00:19:47.716 user 1m6.983s 00:19:47.716 sys 0m5.412s 00:19:47.716 14:52:25 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.716 14:52:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:47.716 
************************************ 00:19:47.716 END TEST blockdev_raid5f 00:19:47.716 ************************************ 00:19:47.716 14:52:25 -- spdk/autotest.sh@194 -- # uname -s 00:19:47.716 14:52:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:47.716 14:52:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:47.716 14:52:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:47.716 14:52:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:47.716 14:52:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:47.717 14:52:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.717 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:19:47.717 14:52:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:47.717 14:52:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:47.717 14:52:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:47.717 14:52:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:47.717 14:52:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:47.717 14:52:25 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:47.717 14:52:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:47.717 14:52:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.717 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:19:47.717 14:52:25 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:47.717 14:52:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:47.717 14:52:25 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:47.717 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:19:50.257 INFO: APP EXITING 00:19:50.257 INFO: killing all VMs 00:19:50.257 INFO: killing vhost app 00:19:50.257 INFO: EXIT DONE 00:19:50.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:50.257 Waiting for block devices as requested 00:19:50.516 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:50.516 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:51.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:51.456 Cleaning 00:19:51.456 Removing: /var/run/dpdk/spdk0/config 00:19:51.456 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:51.456 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:51.456 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:51.456 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:51.456 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:51.456 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:51.456 Removing: /dev/shm/spdk_tgt_trace.pid58160 00:19:51.456 Removing: /var/run/dpdk/spdk0 00:19:51.456 Removing: /var/run/dpdk/spdk_pid57908 00:19:51.456 Removing: /var/run/dpdk/spdk_pid58160 00:19:51.456 Removing: /var/run/dpdk/spdk_pid58389 00:19:51.456 Removing: /var/run/dpdk/spdk_pid58504 00:19:51.456 Removing: /var/run/dpdk/spdk_pid58560 00:19:51.456 Removing: /var/run/dpdk/spdk_pid58699 00:19:51.456 Removing: /var/run/dpdk/spdk_pid58717 
00:19:51.456 Removing: /var/run/dpdk/spdk_pid58927 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59051 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59163 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59291 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59404 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59444 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59486 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59562 00:19:51.456 Removing: /var/run/dpdk/spdk_pid59679 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60134 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60209 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60283 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60310 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60464 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60480 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60630 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60650 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60719 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60743 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60807 00:19:51.456 Removing: /var/run/dpdk/spdk_pid60825 00:19:51.456 Removing: /var/run/dpdk/spdk_pid61026 00:19:51.456 Removing: /var/run/dpdk/spdk_pid61062 00:19:51.717 Removing: /var/run/dpdk/spdk_pid61151 00:19:51.717 Removing: /var/run/dpdk/spdk_pid62500 00:19:51.717 Removing: /var/run/dpdk/spdk_pid62706 00:19:51.717 Removing: /var/run/dpdk/spdk_pid62852 00:19:51.717 Removing: /var/run/dpdk/spdk_pid63495 00:19:51.717 Removing: /var/run/dpdk/spdk_pid63701 00:19:51.717 Removing: /var/run/dpdk/spdk_pid63847 00:19:51.717 Removing: /var/run/dpdk/spdk_pid64490 00:19:51.717 Removing: /var/run/dpdk/spdk_pid64821 00:19:51.717 Removing: /var/run/dpdk/spdk_pid64964 00:19:51.717 Removing: /var/run/dpdk/spdk_pid66349 00:19:51.717 Removing: /var/run/dpdk/spdk_pid66602 00:19:51.717 Removing: /var/run/dpdk/spdk_pid66748 00:19:51.717 Removing: /var/run/dpdk/spdk_pid68133 00:19:51.717 Removing: /var/run/dpdk/spdk_pid68386 00:19:51.717 Removing: /var/run/dpdk/spdk_pid68532 
00:19:51.717 Removing: /var/run/dpdk/spdk_pid69917 00:19:51.717 Removing: /var/run/dpdk/spdk_pid70363 00:19:51.717 Removing: /var/run/dpdk/spdk_pid70513 00:19:51.717 Removing: /var/run/dpdk/spdk_pid72005 00:19:51.717 Removing: /var/run/dpdk/spdk_pid72278 00:19:51.717 Removing: /var/run/dpdk/spdk_pid72425 00:19:51.717 Removing: /var/run/dpdk/spdk_pid73924 00:19:51.717 Removing: /var/run/dpdk/spdk_pid74194 00:19:51.717 Removing: /var/run/dpdk/spdk_pid74340 00:19:51.717 Removing: /var/run/dpdk/spdk_pid75835 00:19:51.717 Removing: /var/run/dpdk/spdk_pid76329 00:19:51.717 Removing: /var/run/dpdk/spdk_pid76475 00:19:51.717 Removing: /var/run/dpdk/spdk_pid76614 00:19:51.717 Removing: /var/run/dpdk/spdk_pid77042 00:19:51.717 Removing: /var/run/dpdk/spdk_pid77783 00:19:51.717 Removing: /var/run/dpdk/spdk_pid78159 00:19:51.717 Removing: /var/run/dpdk/spdk_pid78851 00:19:51.717 Removing: /var/run/dpdk/spdk_pid79304 00:19:51.717 Removing: /var/run/dpdk/spdk_pid80064 00:19:51.717 Removing: /var/run/dpdk/spdk_pid80473 00:19:51.717 Removing: /var/run/dpdk/spdk_pid82449 00:19:51.717 Removing: /var/run/dpdk/spdk_pid82894 00:19:51.717 Removing: /var/run/dpdk/spdk_pid83334 00:19:51.717 Removing: /var/run/dpdk/spdk_pid85426 00:19:51.717 Removing: /var/run/dpdk/spdk_pid85910 00:19:51.717 Removing: /var/run/dpdk/spdk_pid86430 00:19:51.717 Removing: /var/run/dpdk/spdk_pid87490 00:19:51.717 Removing: /var/run/dpdk/spdk_pid87818 00:19:51.717 Removing: /var/run/dpdk/spdk_pid88765 00:19:51.717 Removing: /var/run/dpdk/spdk_pid89088 00:19:51.717 Removing: /var/run/dpdk/spdk_pid90028 00:19:51.717 Removing: /var/run/dpdk/spdk_pid90356 00:19:51.717 Removing: /var/run/dpdk/spdk_pid91034 00:19:51.717 Removing: /var/run/dpdk/spdk_pid91315 00:19:51.717 Removing: /var/run/dpdk/spdk_pid91383 00:19:51.717 Removing: /var/run/dpdk/spdk_pid91425 00:19:51.717 Removing: /var/run/dpdk/spdk_pid91676 00:19:51.717 Removing: /var/run/dpdk/spdk_pid91856 00:19:51.717 Removing: /var/run/dpdk/spdk_pid91949 
00:19:51.717 Removing: /var/run/dpdk/spdk_pid92053 00:19:51.717 Removing: /var/run/dpdk/spdk_pid92106 00:19:51.717 Removing: /var/run/dpdk/spdk_pid92137 00:19:51.717 Clean 00:19:51.977 14:52:29 -- common/autotest_common.sh@1453 -- # return 0 00:19:51.977 14:52:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:51.977 14:52:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.977 14:52:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.977 14:52:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:51.977 14:52:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.977 14:52:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.977 14:52:30 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:51.977 14:52:30 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:51.977 14:52:30 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:51.977 14:52:30 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:51.977 14:52:30 -- spdk/autotest.sh@398 -- # hostname 00:19:51.977 14:52:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:52.236 geninfo: WARNING: invalid characters removed from testname! 
00:20:14.212 14:52:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:16.123 14:52:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:18.073 14:52:55 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:19.984 14:52:57 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:21.891 14:52:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:23.798 14:53:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:25.705 14:53:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:25.705 14:53:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:25.705 14:53:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:25.705 14:53:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:25.705 14:53:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:25.705 14:53:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:25.705 + [[ -n 5424 ]] 00:20:25.705 + sudo kill 5424 00:20:25.715 [Pipeline] } 00:20:25.730 [Pipeline] // timeout 00:20:25.735 [Pipeline] } 00:20:25.747 [Pipeline] // stage 00:20:25.751 [Pipeline] } 00:20:25.761 [Pipeline] // catchError 00:20:25.767 [Pipeline] stage 00:20:25.769 [Pipeline] { (Stop VM) 00:20:25.782 [Pipeline] sh 00:20:26.064 + vagrant halt 00:20:28.600 ==> default: Halting domain... 00:20:36.759 [Pipeline] sh 00:20:37.042 + vagrant destroy -f 00:20:39.578 ==> default: Removing domain... 
00:20:39.590 [Pipeline] sh 00:20:39.874 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:39.884 [Pipeline] } 00:20:39.898 [Pipeline] // stage 00:20:39.903 [Pipeline] } 00:20:39.918 [Pipeline] // dir 00:20:39.923 [Pipeline] } 00:20:39.938 [Pipeline] // wrap 00:20:39.944 [Pipeline] } 00:20:39.958 [Pipeline] // catchError 00:20:39.967 [Pipeline] stage 00:20:39.969 [Pipeline] { (Epilogue) 00:20:39.981 [Pipeline] sh 00:20:40.266 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:45.557 [Pipeline] catchError 00:20:45.559 [Pipeline] { 00:20:45.571 [Pipeline] sh 00:20:45.856 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:45.857 Artifacts sizes are good 00:20:45.866 [Pipeline] } 00:20:45.881 [Pipeline] // catchError 00:20:45.892 [Pipeline] archiveArtifacts 00:20:45.899 Archiving artifacts 00:20:46.046 [Pipeline] cleanWs 00:20:46.066 [WS-CLEANUP] Deleting project workspace... 00:20:46.066 [WS-CLEANUP] Deferred wipeout is used... 00:20:46.072 [WS-CLEANUP] done 00:20:46.074 [Pipeline] } 00:20:46.090 [Pipeline] // stage 00:20:46.095 [Pipeline] } 00:20:46.108 [Pipeline] // node 00:20:46.113 [Pipeline] End of Pipeline 00:20:46.166 Finished: SUCCESS